An HDFSClient for Hadoop using the native Java API: a tutorial


I’d like to talk about doing some day-to-day administrative tasks on a Hadoop system. Although the hadoop fs <command> tools can get most of these things done, it’s still worthwhile to explore the rich Java API for Hadoop. This post is by no means complete, but it can get you started well.

The most basic step is to create an object of this class.

HDFSClient client = new HDFSClient();

Of course, you need to import a bunch of classes. If you are using an IDE like Eclipse, you’ll follow along just fine by importing these. This set of imports works for the entire code.

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

1. Copying from the local file system to HDFS.
Copies a local file onto HDFS. The equivalent hadoop file system command is:

hadoop fs -copyFromLocal <local fs> <hadoop fs>

I am not explaining much here as the comments are quite helpful. Of course, when loading the configuration files, make sure to point them to your Hadoop installation’s conf directory. For mine, it looks like this:

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
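
Since every method in this post loads the same three configuration files, you could factor that out into a small helper method; here is a minimal sketch (the getConf name is my own, not part of the original class):

private static Configuration getConf() {
// Load the cluster configuration; adjust the paths to your installation
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));
return conf;
}

The listings below keep the explicit version so that each one stands on its own.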

This is what the Java API version looks like:

public void copyFromLocal (String source, String dest) throws IOException {

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

Path dstPath = new Path(dest);
// Check if the destination directory exists
if (!(fileSystem.exists(dstPath))) {
System.out.println("No such destination " + dstPath);
return;
}

// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

try{
fileSystem.copyFromLocalFile(srcPath, dstPath);
System.out.println("File " + filename + "copied to " + dest);
}catch(Exception e){
System.err.println("Exception caught! :" + e);
System.exit(1);
}finally{
fileSystem.close();
}
}

2. Copying files from HDFS to the local file system.

The hadoop fs command is the following.

hadoop fs -copyToLocal <hadoop fs> <local fs>

Alternatively, hadoop fs -get does the same thing.

Here is the Java method:

public void copyToLocal (String source, String dest) throws IOException {

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

Path dstPath = new Path(dest);
// Check if the source file exists
if (!(fileSystem.exists(srcPath))) {
System.out.println("No such file " + srcPath);
return;
}

// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

try{
fileSystem.copyToLocalFile(srcPath, dstPath);
System.out.println("File " + filename + " copied to " + dest);
}catch(Exception e){
System.err.println("Exception caught! :" + e);
System.exit(1);
}finally{
fileSystem.close();
}
}

3. Renaming a file in HDFS.

You can use the mv command in this context.

hadoop fs -mv <old name> <new name>

The Java method:

public void renameFile (String fromthis, String tothis) throws IOException{
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path fromPath = new Path(fromthis);
Path toPath = new Path(tothis);

if (!(fileSystem.exists(fromPath))) {
System.out.println("No such destination " + fromPath);
return;
}

if (fileSystem.exists(toPath)) {
System.out.println("Already exists! " + toPath);
return;
}

try{
boolean isRenamed = fileSystem.rename(fromPath, toPath);
if(isRenamed){
System.out.println("Renamed from " + fromthis + "to " + tothis);
}
}catch(Exception e){
System.out.println("Exception :" + e);
System.exit(1);
}finally{
fileSystem.close();
}

}

4. Upload or add a file to HDFS.

This writes a local file into HDFS through an output stream, which is essentially what hadoop fs -put does.

public void addFile(String source, String dest) throws IOException {

// Conf object will read the HDFS configuration parameters
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

// Create the destination path including the filename.
if (dest.charAt(dest.length() - 1) != '/') {
dest = dest + "/" + filename;
} else {
dest = dest + filename;
}

// Check if the file already exists
Path path = new Path(dest);
if (fileSystem.exists(path)) {
System.out.println("File " + dest + " already exists");
return;
}

// Create a new file and write data to it.
FSDataOutputStream out = fileSystem.create(path);
InputStream in = new BufferedInputStream(new FileInputStream(
new File(source)));

byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = in.read(b)) > 0) {
out.write(b, 0, numBytes);
}

// Close all the file descriptors
in.close();
out.close();
fileSystem.close();
}
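
As a side note, Hadoop ships a utility class that does this buffered copy loop for you: org.apache.hadoop.io.IOUtils. A minimal sketch of the same write using it, assuming the setup from addFile above (plus an extra import org.apache.hadoop.io.IOUtils;):

FSDataOutputStream out = fileSystem.create(path);
InputStream in = new BufferedInputStream(new FileInputStream(new File(source)));
// Copy in 4096-byte chunks; the trailing 'true' closes both streams when done
IOUtils.copyBytes(in, out, 4096, true);
fileSystem.close();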

5. Delete a file from HDFS.

You can use the following:

For removing a directory or a file:

hadoop fs -rmr <hdfs path>

If you also want to skip the trash, use:

hadoop fs -rmr -skipTrash <hdfs path>

public void deleteFile(String file) throws IOException {
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

Path path = new Path(file);
if (!fileSystem.exists(path)) {
System.out.println("File " + file + " does not exists");
return;
}

fileSystem.delete(path, true);

fileSystem.close();
}

6. Get modification time of a file in HDFS.

If you have any ideas on this, let me know. :)

public void getModificationTime(String source) throws IOException{

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

// Check if the file exists
if (!(fileSystem.exists(srcPath))) {
System.out.println("No such file " + srcPath);
return;
}
// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

FileStatus fileStatus = fileSystem.getFileStatus(srcPath);
long modificationTime = fileStatus.getModificationTime();

System.out.format("File %s; Modification time : %0.2f %n",filename,modificationTime);

}
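
The modification time comes back as a plain long: milliseconds since the Unix epoch. It usually reads better formatted as a date; here is a minimal sketch using the standard JDK classes java.util.Date and java.text.SimpleDateFormat:

SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
System.out.println("File " + filename + "; Modification time: " + sdf.format(new Date(modificationTime)));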

7. Get the block locations of a file in HDFS.

public void getBlockLocations(String source) throws IOException{

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

// Check if the file exists
if (!(ifExists(srcPath))) {
System.out.println("No such file " + srcPath);
return;
}
// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

FileStatus fileStatus = fileSystem.getFileStatus(srcPath);

BlockLocation[] blkLocations = fileSystem.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;

System.out.println("File :" + filename + "stored at:");
for (int i=0; i < blkCount; i++) {
String[] hosts = blkLocations[i].getHosts();
System.out.format("Host %d: %s %n", i, hosts);
}

}

8. List all the datanodes by hostname.
This is a neater way than looking up the /etc/hosts file on the namenode.

public void getHostnames () throws IOException{
Configuration config = new Configuration();
config.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fs = FileSystem.get(config);
// This cast assumes the default file system is HDFS (fs.default.name points to an hdfs:// URI)
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();

for (DatanodeInfo dataNode : dataNodeStats) {
System.out.println(dataNode.getHostName());
}
}

9. Create a new directory in HDFS.
Creating a directory is done as:

hadoop fs -mkdir <hadoop fs path>

public void mkdir(String dir) throws IOException {
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

Path path = new Path(dir);
if (fileSystem.exists(path)) {
System.out.println("Dir " + dir + " already exists!");
return;
}

fileSystem.mkdirs(path);

fileSystem.close();
}

10. Read a file from HDFS.

This opens the file on HDFS and writes a copy of it into the local working directory.

public void readFile(String file) throws IOException {
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

Path path = new Path(file);
if (!fileSystem.exists(path)) {
System.out.println("File " + file + " does not exists");
return;
}

FSDataInputStream in = fileSystem.open(path);

String filename = file.substring(file.lastIndexOf('/') + 1,
file.length());

OutputStream out = new BufferedOutputStream(new FileOutputStream(
new File(filename)));

byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = in.read(b)) > 0) {
out.write(b, 0, numBytes);
}

in.close();
out.close();
fileSystem.close();
}

11. Checking if a file exists in HDFS.

public boolean ifExists (Path source) throws IOException{

Configuration config = new Configuration();
config.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem hdfs = FileSystem.get(config);
return hdfs.exists(source);
}
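
Since ifExists takes a Path rather than a String, a call to it looks like this (the path is just a made-up example):

HDFSClient client = new HDFSClient();
if (client.ifExists(new Path("/user/hadoop/input.txt"))) {
System.out.println("File exists!");
}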

I know this is in no way complete, but this is already a rather long post. I hope it is useful. Responses appreciated!
And here is the complete code for HDFSClient.java. Happy Hadooping! :)

/*
Feel free to use, copy and distribute this program in any form.
HDFSClient.java

http://linuxjunkies.wordpress.com/

2011
*/

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class HDFSClient {

public HDFSClient() {

}

public static void printUsage(){
System.out.println("Usage: hdfsclient add <local_path> <hdfs_path>");
System.out.println("Usage: hdfsclient read <hdfs_path>");
System.out.println("Usage: hdfsclient delete <hdfs_path>");
System.out.println("Usage: hdfsclient mkdir <hdfs_path>");
System.out.println("Usage: hdfsclient copyfromlocal <local_path> <hdfs_path>");
System.out.println("Usage: hdfsclient copytolocal <hdfs_path> <local_path>");
System.out.println("Usage: hdfsclient rename <old_hdfs_path> <new_hdfs_path>");
System.out.println("Usage: hdfsclient modificationtime <hdfs_path>");
System.out.println("Usage: hdfsclient getblocklocations <hdfs_path>");
System.out.println("Usage: hdfsclient gethostnames");
}

public boolean ifExists (Path source) throws IOException{

Configuration config = new Configuration();
config.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem hdfs = FileSystem.get(config);
return hdfs.exists(source);
}

public void getHostnames () throws IOException{
Configuration config = new Configuration();
config.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
config.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fs = FileSystem.get(config);
// This cast assumes the default file system is HDFS (fs.default.name points to an hdfs:// URI)
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();

for (DatanodeInfo dataNode : dataNodeStats) {
System.out.println(dataNode.getHostName());
}
}

public void getBlockLocations(String source) throws IOException{

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

// Check if the file exists
if (!(ifExists(srcPath))) {
System.out.println("No such file " + srcPath);
return;
}
// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

FileStatus fileStatus = fileSystem.getFileStatus(srcPath);

BlockLocation[] blkLocations = fileSystem.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;

System.out.println("File :" + filename + "stored at:");
for (int i=0; i < blkCount; i++) {
String[] hosts = blkLocations[i].getHosts();
System.out.format("Host %d: %s %n", i, hosts);
}

}

public void getModificationTime(String source) throws IOException{

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

// Check if the file exists
if (!(fileSystem.exists(srcPath))) {
System.out.println("No such file " + srcPath);
return;
}
// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

FileStatus fileStatus = fileSystem.getFileStatus(srcPath);
long modificationTime = fileStatus.getModificationTime();

System.out.format("File %s; Modification time : %0.2f %n",filename,modificationTime);

}

public void copyFromLocal (String source, String dest) throws IOException {

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

Path dstPath = new Path(dest);
// Check if the destination directory exists
if (!(fileSystem.exists(dstPath))) {
System.out.println("No such destination " + dstPath);
return;
}

// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

try{
fileSystem.copyFromLocalFile(srcPath, dstPath);
System.out.println("File " + filename + "copied to " + dest);
}catch(Exception e){
System.err.println("Exception caught! :" + e);
System.exit(1);
}finally{
fileSystem.close();
}
}

public void copyToLocal (String source, String dest) throws IOException {

Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path srcPath = new Path(source);

Path dstPath = new Path(dest);
// Check if the source file exists
if (!(fileSystem.exists(srcPath))) {
System.out.println("No such file " + srcPath);
return;
}

// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

try{
fileSystem.copyToLocalFile(srcPath, dstPath);
System.out.println("File " + filename + "copied to " + dest);
}catch(Exception e){
System.err.println("Exception caught! :" + e);
System.exit(1);
}finally{
fileSystem.close();
}
}

public void renameFile (String fromthis, String tothis) throws IOException{
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);
Path fromPath = new Path(fromthis);
Path toPath = new Path(tothis);

if (!(fileSystem.exists(fromPath))) {
System.out.println("No such destination " + fromPath);
return;
}

if (fileSystem.exists(toPath)) {
System.out.println("Already exists! " + toPath);
return;
}

try{
boolean isRenamed = fileSystem.rename(fromPath, toPath);
if(isRenamed){
System.out.println("Renamed from " + fromthis + "to " + tothis);
}
}catch(Exception e){
System.out.println("Exception :" + e);
System.exit(1);
}finally{
fileSystem.close();
}

}

public void addFile(String source, String dest) throws IOException {

// Conf object will read the HDFS configuration parameters
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

// Get the filename out of the file path
String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

// Create the destination path including the filename.
if (dest.charAt(dest.length() - 1) != '/') {
dest = dest + "/" + filename;
} else {
dest = dest + filename;
}

// Check if the file already exists
Path path = new Path(dest);
if (fileSystem.exists(path)) {
System.out.println("File " + dest + " already exists");
return;
}

// Create a new file and write data to it.
FSDataOutputStream out = fileSystem.create(path);
InputStream in = new BufferedInputStream(new FileInputStream(
new File(source)));

byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = in.read(b)) > 0) {
out.write(b, 0, numBytes);
}

// Close all the file descriptors
in.close();
out.close();
fileSystem.close();
}

public void readFile(String file) throws IOException {
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

Path path = new Path(file);
if (!fileSystem.exists(path)) {
System.out.println("File " + file + " does not exists");
return;
}

FSDataInputStream in = fileSystem.open(path);

String filename = file.substring(file.lastIndexOf('/') + 1,
file.length());

OutputStream out = new BufferedOutputStream(new FileOutputStream(
new File(filename)));

byte[] b = new byte[1024];
int numBytes = 0;
while ((numBytes = in.read(b)) > 0) {
out.write(b, 0, numBytes);
}

in.close();
out.close();
fileSystem.close();
}

public void deleteFile(String file) throws IOException {
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

Path path = new Path(file);
if (!fileSystem.exists(path)) {
System.out.println("File " + file + " does not exists");
return;
}

fileSystem.delete(path, true);

fileSystem.close();
}

public void mkdir(String dir) throws IOException {
Configuration conf = new Configuration();
conf.addResource(new Path("/home/hadoop/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/hdfs-site.xml"));
conf.addResource(new Path("/home/hadoop/hadoop/conf/mapred-site.xml"));

FileSystem fileSystem = FileSystem.get(conf);

Path path = new Path(dir);
if (fileSystem.exists(path)) {
System.out.println("Dir " + dir + " already exists!");
return;
}

fileSystem.mkdirs(path);

fileSystem.close();
}

public static void main(String[] args) throws IOException {

if (args.length < 1) {
printUsage();
System.exit(1);
}

HDFSClient client = new HDFSClient();
if (args[0].equals("add")) {
if (args.length < 3) {
System.out.println("Usage: hdfsclient add <local_path> " + "<hdfs_path>");
System.exit(1);
}
client.addFile(args[1], args[2]);

} else if (args[0].equals("read")) {
if (args.length < 2) {
System.out.println("Usage: hdfsclient read <hdfs_path>");
System.exit(1);
}
client.readFile(args[1]);

} else if (args[0].equals("delete")) {
if (args.length < 2) {
System.out.println("Usage: hdfsclient delete <hdfs_path>");
System.exit(1);
}

client.deleteFile(args[1]);
} else if (args[0].equals("mkdir")) {
if (args.length < 2) {
System.out.println("Usage: hdfsclient mkdir <hdfs_path>");
System.exit(1);
}

client.mkdir(args[1]);
}else if (args[0].equals("copyfromlocal")) {
if (args.length < 3) {
System.out.println("Usage: hdfsclient copyfromlocal <from_local_path> <to_hdfs_path>");
System.exit(1);
}

client.copyFromLocal(args[1], args[2]);
} else if (args[0].equals("rename")) {
if (args.length < 3) {
System.out.println("Usage: hdfsclient rename <old_hdfs_path> <new_hdfs_path>");
System.exit(1);
}

client.renameFile(args[1], args[2]);
}else if (args[0].equals("copytolocal")) {
if (args.length < 3) {
System.out.println("Usage: hdfsclient copytolocal <from_hdfs_path> <to_local_path>");
System.exit(1);
}

client.copyToLocal(args[1], args[2]);
}else if (args[0].equals("modificationtime")) {
if (args.length < 2) {
System.out.println("Usage: hdfsclient modificationtime <hdfs_path>");
System.exit(1);
}

client.getModificationTime(args[1]);
}else if (args[0].equals("getblocklocations")) {
if (args.length < 2) {
System.out.println("Usage: hdfsclient getblocklocations <hdfs_path>");
System.exit(1);
}

client.getBlockLocations(args[1]);
} else if (args[0].equals("gethostnames")) {

client.getHostnames();
}else {

printUsage();
System.exit(1);
}

System.out.println("Done!");
}
}
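
To try the client out, compile it with the Hadoop jar on the classpath and run it through the hadoop launcher script, which sets up the HDFS classpath and configuration for you. Something along these lines should work; the exact jar name depends on your Hadoop version (mine is 0.20.2), so treat this as a sketch:

javac -classpath /home/hadoop/hadoop/hadoop-0.20.2-core.jar HDFSClient.java
HADOOP_CLASSPATH=. hadoop HDFSClient mkdir /user/hadoop/testdir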

13 comments

  1. I have the following queries:

    1. The Hadoop framework automatically splits a file into blocks and distributes the blocks over the cluster nodes. My question is: can I control, through the Java API, which cluster machine a block goes to? That is, I want to assign a cluster machine to a file block based on some criteria.

    2. Also, instead of specifying the input to a MapReduce job as the file name stored on HDFS, can I specify a block as the input? That is, can I specify the input at the block level instead of the file level?

    Please, somebody help me!

    1. 1. “The Hadoop framework automatically splits the file into blocks and distributes the blocks on cluster nodes. Can I control, through the Java API, which cluster machine a block goes to? That is, I want to assign a cluster machine to a file block based on some criteria.”

      I have heard of setting the replication level for an individual block on HDFS. And may I know the reason why you need to keep all the blocks of an individual file on one machine? That is possible with a replication factor of one. Besides, keeping everything on a single machine does not help parallelism. Hope you already know how blocks are stored: per disk, then on the same rack, and then perhaps in another data center.

      I have not tried this Java API, but you should look at org.apache.hadoop.fs.BlockLocation. In the constructor, you can specify the hosts. Hope this suits your need. And let me know if you find some other way to do it.

      2. “Also, instead of specifying the input to a MapReduce job as the file name stored on HDFS, can I specify a block as the input, i.e., specify the input at the block level instead of the file level?”

      Again, I have never read about it. Is this for some research work? Look into this in the documentation:
      org.apache.hadoop.fs.s3.Block
      You get the block id from this API; maybe you can put it to use.

      1. salmakhalil:

        Hi,
        Did you find a way to answer your question (how blocks are stored per disk, then on the same rack, and then perhaps in another data center)?
        Thanks in advance,
        Salma

  2. sandeep:

    Hi, some of them are throwing errors like:
    org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Access denied for user SReddy. Superuser privilege is required
    How do I run them as the hdfs user?

    1. You should be running it as a user who has access to HDFS, or with appropriate permissions. This is working code, tested on Hadoop 0.20.2.

  3. There is a correction in listing 2: it should be fileSystem.copyToLocalFile(srcPath, dstPath), and the method name should match as well.

    1. Yes, thank you Puneet for pointing that out. I will make the correction.

  4. A good way of describing things, and a nice post for gathering material on my presentation subject, which I am going to deliver at my academy.

    1. Thank you. Good to know that it was helpful.

  5. rameshcharykotha · · Reply

    Thank you. Nice program.

  6. Nice article. I am in the US and would like to talk to you. Please email me your contact info at someshmnda@gmail.com.

  7. […] Actually the -copyFromLocal function inside the hdfs in Hadoop is normally written in Java program […]
