Efficiency of User with Batch Auditing by Verifying Multiple Auditing Tasks
Abstract
In distributed file systems such as HDFS, a file is partitioned into a number of file chunks allocated to distinct nodes, so that MapReduce tasks can be performed in parallel over the nodes, making resource utilization effective and improving the response time of the job. In large, failure-prone cloud environments, files and nodes are dynamically created, replaced, and added in the system, due to which some nodes become overloaded while others are underloaded. This leads to load imbalance in the distributed file system. To overcome this load-imbalance problem, a fully distributed load-rebalancing algorithm has been implemented. The algorithm is dynamic in nature: it does not consider the previous state or behaviour of the system (global knowledge) but depends only on the present behaviour of the system, taking into account load estimation, load comparison, system stability, system performance, interaction between the nodes, the nature of the load to be transferred, node selection, and network traffic. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature.
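The rebalancing idea described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): each node is judged only against the cluster's present average load, with no historical or global state, and overloaded nodes shed chunks to underloaded ones. The function name, the `threshold` parameter, and the migration-plan format are all assumptions made for the example.

```python
def rebalance(loads, threshold=0.1):
    """Sketch of a distributed-style load-rebalancing decision.

    loads: dict mapping node name -> number of file chunks it holds.
    threshold: a node is 'heavy' above avg*(1+threshold) and 'light'
    below avg*(1-threshold); both are illustrative choices.

    Returns (plan, new_loads), where plan is a list of
    (source, destination, n_chunks) chunk migrations.
    """
    avg = sum(loads.values()) / len(loads)
    upper = avg * (1 + threshold)
    lower = avg * (1 - threshold)

    # Classify nodes by their *current* load only (no global history).
    heavy = sorted((n for n in loads if loads[n] > upper),
                   key=loads.get, reverse=True)
    light = sorted((n for n in loads if loads[n] < lower),
                   key=loads.get)

    plan = []
    new = dict(loads)
    for h in heavy:
        for l in light:
            if new[h] <= upper:
                break  # this heavy node is now within tolerance
            # Move just enough chunks to bring both nodes toward average.
            n = min(new[h] - int(avg), int(avg) - new[l])
            if n > 0:
                plan.append((h, l, n))
                new[h] -= n
                new[l] += n
    return plan, new
```

For example, with loads `{"A": 10, "B": 2, "C": 6}` the average is 6, so node A sheds four chunks to node B and all three nodes end up balanced. A real system would additionally weight the choice of destination by network distance and current traffic, as the abstract notes.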
Keywords
Copyright (c) 2017 Edupedia Publications Pvt Ltd
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
All published Articles are Open Access at https://journals.pen2print.org/index.php/ijr/