Big Data Online Practice Test - 5
This test covers Big Data end to end, with important questions ranging from basic to advanced level.
Q. Listed below are a few tasks performed in Hadoop:
1) Gets the fsimage and edit logs from the NameNode at regular intervals and merges them.
2) Writes the edit logs of the NameNode into the fsimage file.
3) Uploads the merged fsimage to the NameNode.
4) Creates a new fsimage for the NameNode.
Mark whether each task is performed by the Checkpoint node (C) or the Secondary NameNode (S).
Q. A developer working in MongoDB was asked to write a query such that, for the document with _id 5 in the Ports collection, two elements (Pipavav, Vizag) are removed from the ShipDock array.
Mark the correct syntax:
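For reference, removing several specific values from an array field is done with the $pull / $pullAll operators. Below is a minimal sketch of the same operation using the MongoDB Scala driver; the database name "shipping" and a local MongoDB instance are assumptions made purely for illustration.

import org.mongodb.scala._
import org.mongodb.scala.model.{Filters, Updates}
import scala.concurrent.Await
import scala.concurrent.duration._

object RemovePortsSketch {
  def main(args: Array[String]): Unit = {
    // Database name "shipping" and a local MongoDB instance are assumed for illustration.
    val client = MongoClient()
    val ports  = client.getDatabase("shipping").getCollection("Ports")

    // pullAll removes every occurrence of the listed values from the ShipDock array
    // of the document whose _id is 5 (the shell equivalent uses $pull with $in).
    val result = ports
      .updateOne(Filters.equal("_id", 5), Updates.pullAll("ShipDock", "Pipavav", "Vizag"))
      .toFuture()

    println(Await.result(result, 10.seconds))
    client.close()
  }
}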
Q. In an interview, the following code snippet was given:
object MyClass {
  def main(args: Array[String]) {
    println("ABCDEF".indexOf('B', 0))
    println("ABCDEF".indexOf("CD"))
    println("ABCDEF".indexOf('A', 2))
  }
}
Mark the correct output.
Q. In a quiz, the following fill-in-the-blank questions were given to freshers:
1) If there are multiple MapReduce and Pig programs to be executed in a sequence, one can use the __________ framework.
2) __________ is a NoSQL database for Hadoop that allows record-level updates.
3) __________ takes different data and places it on different machines.
4) __________ is responsible for cluster management.
Mark the correct answers.
Q. I am a query engine for big data.
I use a JSON document model, which enables developers to query data of any structure.
I am inspired by Dremel, the query engine that allows massive SQL-like queries to be run on huge amounts of data.
I can be integrated with local files, HDFS, NAS, Amazon S3, etc.
Which framework am I?
Q. A lead was explaining the concept of transformations in Spark to his team. He mentioned the following points:
1) In a narrow transformation, any single child row depends on only one parent row.
2) In a wide transformation, data is mapped from one partition to many partitions.
3) Union, distinct, etc. are examples of wide transformations.
4) Narrow transformations are also called shuffle transformations.
Mark each of these statements as true or false.
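For intuition, here is a minimal Spark sketch (assuming a local SparkSession; the object name is illustrative) that contrasts a narrow transformation such as map with a wide transformation such as distinct, which shuffles data across partitions:

import org.apache.spark.sql.SparkSession

object TransformationsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("transformations-demo").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val nums = sc.parallelize(1 to 10, numSlices = 4)

    // map is a narrow transformation: each output partition depends on exactly one input partition.
    val doubled = nums.map(_ * 2)

    // distinct is a wide transformation: records are shuffled, so one input partition
    // can contribute rows to many output partitions.
    val unique = doubled.distinct()

    println(unique.collect().sorted.mkString(", "))
    spark.stop()
  }
}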
Q. A developer was testing the MapReduce knowledge of his peer. He asked him to point out the basic input and output parameters of a Mapper class (say, in a WordCount program). The options given were:
1) IntWritable
2) Text
3) LongWritable
4) FloatWritable
Mark the correct answer.
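As a reminder of where these types appear, a WordCount-style mapper written in Scala against the Hadoop MapReduce API might look like the sketch below; the class name is illustrative.

import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.Mapper

// Input key/value: the byte offset of the line (LongWritable) and the line itself (Text).
// Output key/value: a word (Text) and its count (IntWritable).
class WordCountMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one  = new IntWritable(1)
  private val word = new Text()

  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { w =>
      word.set(w)
      context.write(word, one)   // emit (word, 1) for every token in the line
    }
  }
}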
Q. In an interview, the following queries were provided:
1) select RPAD('India',6,'*') from nations;
2) select REPEAT('star',2);
3) select INSTR("condition", 'd');
Mark the correct output.
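If you want to experiment with these string functions yourself, the same functions (rpad, repeat, instr) are also available in Spark SQL; a minimal sketch, assuming a local Spark session:

import org.apache.spark.sql.SparkSession

object StringFunctionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("string-fns").master("local[*]").getOrCreate()

    // rpad pads the string on the right up to the given length,
    // repeat concatenates n copies of the string,
    // instr returns the 1-based position of the first match (0 if not found).
    spark.sql("SELECT rpad('India', 6, '*'), repeat('star', 2), instr('condition', 'd')").show(false)

    spark.stop()
  }
}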
Q. A developer was explaining concepts of HBase tables to his peer:
1) Before changing any table settings, one should first disable the table.
2) We need to disable a table before using the drop command on it.
3) It is not necessary to disable a table that we want to delete.
4) If a table is disabled, we can check for its existence with the 'exists' command.
Mark which of these are valid or invalid statements about HBase tables.
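For context, the disable-before-delete rule also applies when going through the HBase client API rather than the shell. A minimal sketch using the Java client from Scala; the table name "employees" is assumed for illustration:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory

object DropTableSketch {
  def main(args: Array[String]): Unit = {
    val conn  = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val admin = conn.getAdmin
    val table = TableName.valueOf("employees")   // illustrative table name

    if (admin.tableExists(table)) {
      admin.disableTable(table)   // a table must be disabled before it can be dropped
      admin.deleteTable(table)
    }

    admin.close()
    conn.close()
  }
}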
Q. Mark which of the following statements are true or false about PigStorage:
1) It is the default load function in the Pig framework.
2) It allows data to be loaded from the file system into Pig.
3) It does not allow the schema of the data, along with its types, to be specified.
4) It allows the delimiter of the data to be specified while loading it into Pig.
Choose the appropriate answer.