4. Leveraging a private vs. public cloud may result in sacrificing some of the core advantages of cloud computing
Correct Answer : Explanation: Private clouds do not make sense for small businesses. But for large and even medium-sized businesses, IT teams can virtualize parts of their infrastructure so that business processes and computing resources run in a private cloud. As the concept matures, the idea is to move everything that needs more flexibility to the cloud.
Question : In the Hadoop framework there are many nodes working as DataNodes. While putting data into the cluster, the NameNode decides on which node and in which rack the data should be copied. Which of the following will help the NameNode find the correct node in a rack? 1. Admin has to do the pre-configuration on the NameNode
Hadoop components are rack-aware. For example, HDFS block placement uses rack awareness for fault tolerance by placing one block replica on a different rack. This preserves data availability in the event of a network switch failure or partition within the cluster.
Hadoop master daemons obtain the rack id of the cluster slaves by invoking either an external script or a Java class as specified by the configuration files. Whether a Java class or an external script is used for topology, the output must adhere to the java org.apache.hadoop.net.DNSToSwitchMapping interface. The interface expects a one-to-one correspondence to be maintained, with the topology information in the format '/myrack/myhost', where '/' is the topology delimiter, 'myrack' is the rack identifier, and 'myhost' is the individual host. Assuming a single /24 subnet per rack, one could use the format '/192.168.100.0/192.168.100.5' as a unique rack-host topology mapping.
To use the Java class for topology mapping, the class name is specified by the net.topology.node.switch.mapping.impl parameter in the configuration file. An example, NetworkTopology.java, is included with the Hadoop distribution and can be customized by the Hadoop administrator. Using a Java class instead of an external script has a performance benefit in that Hadoop doesn't need to fork an external process when a new slave node registers itself.
If implementing an external script, it is specified with the net.topology.script.file.name parameter in the configuration files. Unlike the Java class, the external topology script is not included with the Hadoop distribution and must be provided by the administrator. Hadoop sends multiple IP addresses to ARGV when forking the topology script. The number of IP addresses sent to the topology script is controlled with net.topology.script.number.args and defaults to 100. If net.topology.script.number.args were changed to 1, a topology script would be forked for each IP submitted by DataNodes and/or NodeManagers.
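The script contract described above can be sketched as follows. This is a minimal, hypothetical topology script, assuming the one-/24-subnet-per-rack convention from the earlier example; the function name resolve_rack and the /default-rack fallback are illustrative, not part of any Hadoop distribution.

```shell
#!/usr/bin/env bash
# Hypothetical topology script sketch: Hadoop passes one or more
# IP addresses/hostnames as arguments and expects one rack path
# per line on stdout, in the same order.

# Map an address to a rack id. Assumption: one /24 subnet per rack,
# so 192.168.100.5 maps to the rack '/192.168.100.0'.
resolve_rack() {
  local host="$1"
  if [[ "$host" =~ ^([0-9]+\.[0-9]+\.[0-9]+)\.[0-9]+$ ]]; then
    echo "/${BASH_REMATCH[1]}.0"
  else
    # Anything we cannot parse as an IPv4 address falls back to a
    # default rack, so the NameNode still gets a valid answer.
    echo "/default-rack"
  fi
}

# Emit one rack path per argument (Hadoop may send up to
# net.topology.script.number.args addresses per invocation).
for host in "$@"; do
  resolve_rack "$host"
done
```

In practice, such a script would be pointed to by net.topology.script.file.name and made executable; production scripts often look hosts up in a maintained topology data file rather than deriving the rack from the subnet.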
Question : You are working as a Chief Data Architect in a retail bank, and you are asked to do the following activities:
- Monitor each ATM transaction
- Monitor each online transaction
Also, you need to create a personalized model for each customer, using existing customer data as well as the customer's Facebook data. The system should be able to learn from this and provide highly targeted promotions. Which of the following systems will help you implement this? 1. Apache Spark