AllExam Dumps



Free Dumps For Cloudera CCD-410





Question ID 12485

MapReduce v2 (MRv2/YARN) is designed to address which two issues?

Option A

Single point of failure in the NameNode.

Option B

Resource pressure on the JobTracker.

Option C

HDFS latency.

Option D

Ability to run frameworks other than MapReduce, such as MPI.

Option E

Reduce complexity of the MapReduce APIs.

Option F

Standardize on a single MapReduce API.

Correct Answer B,D
Explanation Reference: Apache Hadoop YARN – Concepts & Applications
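For context, a cluster opts into MRv2 by pointing the MapReduce framework at YARN. A minimal mapred-site.xml fragment (property name and value as defined in the standard Hadoop configuration) looks like this:

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN (ResourceManager/NodeManager)
       instead of the classic JobTracker/TaskTracker runtime. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

With this setting, job submission goes to the ResourceManager, and a per-job ApplicationMaster takes over the scheduling duties that previously concentrated on the single JobTracker.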


Question ID 12486

How are keys and values presented and passed to the reducers during a standard sort and
shuffle phase of MapReduce?

Option A

Keys are presented to reducer in sorted order; values for a given key are not sorted.

Option B

Keys are presented to reducer in sorted order; values for a given key are sorted in ascending order.

Option C

Keys are presented to a reducer in random order; values for a given key are not sorted.

Option D

Keys are presented to a reducer in random order; values for a given key are sorted in ascending order.

Correct Answer A
Explanation The Reducer has three primary phases:

1. Shuffle. The Reducer copies the sorted output of each Mapper across the network using HTTP.

2. Sort. The framework merge-sorts the Reducer inputs by key (since different Mappers may have emitted the same key). The shuffle and sort phases occur simultaneously: outputs are merged while they are being fetched. Secondary sort: to achieve a secondary sort on the values returned by the value iterator, the application should extend the key with the secondary key and define a grouping comparator. Keys are then sorted using the entire composite key, but grouped using the grouping comparator, which decides which keys and values are sent to the same call to reduce().

3. Reduce. In this phase the reduce(Object, Iterable, Context) method is called for each key and its collection of values in the sorted input. The output of the reduce task is typically written to a RecordWriter via TaskInputOutputContext.write(Object, Object). The output of the Reducer is not re-sorted.

Reference: org.apache.hadoop.mapreduce, class Reducer
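The behavior behind answer A can be illustrated with a minimal Python sketch (a simulation, not Hadoop code) of the shuffle/sort step: outputs from all mappers are merged and sorted by key only, so keys reach the reducer in sorted order while each key's values arrive in no guaranteed order.

```python
from itertools import groupby
from operator import itemgetter

def shuffle_and_sort(mapper_outputs):
    """Simulate the MapReduce shuffle/sort: merge all mapper outputs,
    sort by key only, and group values per key. Values keep whatever
    order the merge happens to leave them in -- they are NOT sorted."""
    merged = [pair for output in mapper_outputs for pair in output]
    merged.sort(key=itemgetter(0))  # stable sort on the key alone
    return [(key, [v for _, v in group])
            for key, group in groupby(merged, key=itemgetter(0))]

# Two mappers emitted overlapping keys with unordered values.
mapper1 = [("b", 3), ("a", 9), ("a", 1)]
mapper2 = [("a", 5), ("b", 2)]

for key, values in shuffle_and_sort([mapper1, mapper2]):
    print(key, values)
# Keys are presented in sorted order; per-key values are unsorted:
# a [9, 1, 5]
# b [3, 2]
```

If sorted values are required, this is exactly where the secondary-sort pattern from the explanation comes in: fold the value into a composite key, sort on the whole key, and group on the natural key with a grouping comparator.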