Running Hadoop on a Linux single-node cluster
Solved
Hello, I installed Hadoop on Ubuntu 11.10, but I get an error on startup. Here is the output:
hduser@velocity-pc:~$ /usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-velocity-pc.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-velocity-pc.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-velocity-pc.out
localhost: Exception in thread "main" java.lang.IllegalArgumentException
localhost: at java.net.URI.create(URI.java:842)
localhost: at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:103)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
localhost: Caused by: java.net.URISyntaxException: Illegal character in scheme name at index 0: hdfs://localhost:54310
localhost: at java.net.URI$Parser.fail(URI.java:2809)
localhost: at java.net.URI$Parser.checkChars(URI.java:2982)
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-jobtracker-velocity-pc.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-tasktracker-velocity-pc.out
Here are the configuration files:
# file: mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name> mapred.job.tracker</name>
<value> localhost:54311</value>
<description> The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.</description>
</property>
</configuration>
# file: core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name> hadoop.tmp.dir</name>
<value> /app/hadoop/tmp</value>
<description> base for other temporary directories</description>
</property>
<property>
<name> fs.default.name</name>
<value> hdfs://localhost:54310</value>
<description> The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
# file: hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name> dfs.replication</name>
<value> 1</value>
<description> Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.</description>
</property>
</configuration>
java -version
java version "1.6.0_25"
Java(TM) SE Runtime Environment (build 1.6.0_25-b06)
Java HotSpot(TM) Server VM (build 20.0-b11, mixed mode)
Could you help me?
Thanks in advance.
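A note on the likely cause: `URISyntaxException: Illegal character in scheme name at index 0: hdfs://localhost:54310` usually means the URI value begins with whitespace. Hadoop passes the `fs.default.name` value to `java.net.URI` as-is, so the space after `<value>` in the core-site.xml above would make index 0 a space rather than the `h` of `hdfs`. A sketch of the corrected property (same standard property names; the only change assumed here is removing the leading whitespace inside the tags):

```xml
<!-- core-site.xml: no whitespace between the tags and their content -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
```

The same cleanup would apply to every `<name>`/`<value>` pair shown above (e.g. `mapred.job.tracker`, `hadoop.tmp.dir`). After editing, restart the cluster with `stop-all.sh` then `start-all.sh` and check that all daemons are up with `jps`.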