2022-11-19
My Hadoop configuration is failing. What should I do? The NameNode log is below:
2019-07-18 06:30:36,609 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = Hadoop/192.168.184.129
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath = /opt/modules/hadoop-2.7.2/etc/hadoop:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/junit-4.11.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/hadoop-annotations-2.7.2.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/jettison-1.1.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/modules/hadoop-2.7.2/share/hadoop/common/lib/
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2017-05-22T10:49Z
STARTUP_MSG: java = 1.8.0_144
************************************************************/
2019-07-18 06:30:36,627 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-07-18 06:30:36,632 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2019-07-18 06:30:37,014 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-07-18 06:30:37,146 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2019-07-18 06:30:37,146 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2019-07-18 06:30:37,149 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://mycluster
2019-07-18 06:30:37,150 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use mycluster to access this namenode/service.
2019-07-18 06:30:37,449 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at:
2019-07-18 06:30:37,515 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-07-18 06:30:37,528 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-07-18 06:30:37,540 INFO org.apache.hadoop.Http request log for is not defined
2019-07-18 06:30:37,547 INFO org.apache.hadoop.Added global filter 'safety' (class=org.apache.hadoop.
2019-07-18 06:30:37,552 INFO org.apache.hadoop.Added filter static_user_filter (class=org.apache.hadoop.to context hdfs
2019-07-18 06:30:37,552 INFO org.apache.hadoop.Added filter static_user_filter (class=org.apache.hadoop.to context logs
2019-07-18 06:30:37,552 INFO org.apache.hadoop.Added filter static_user_filter (class=org.apache.hadoop.to context static
2019-07-18 06:30:37,585 INFO org.apache.hadoop.Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-07-18 06:30:37,587 INFO org.apache.hadoop.addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-07-18 06:30:37,612 INFO org.apache.hadoop.Jetty bound to port 50070
2019-07-18 06:30:37,612 INFO org.mortbay.log: jetty-6.1.26
2019-07-18 06:30:37,833 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2019-07-18 06:30:37,891 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-07-18 06:30:37,926 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2019-07-18 06:30:37,926 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2019-07-18 06:30:37,986 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2019-07-18 06:30:37,986 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2019-07-18 06:30:37,990 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2019-07-18 06:30:37,991 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2019 Jul 18 06:30:37
2019-07-18 06:30:37,997 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2019-07-18 06:30:37,997 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-07-18 06:30:37,998 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2019-07-18 06:30:37,998 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2019-07-18 06:30:38,053 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2019-07-18 06:30:38,053 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2019-07-18 06:30:38,053 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2019-07-18 06:30:38,053 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2019-07-18 06:30:38,053 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2019-07-18 06:30:38,054 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2019-07-18 06:30:38,054 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2019-07-18 06:30:38,054 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2019-07-18 06:30:38,061 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
2019-07-18 06:30:38,061 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2019-07-18 06:30:38,066 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2019-07-18 06:30:38,067 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: mycluster
2019-07-18 06:30:38,067 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2019-07-18 06:30:38,068 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2019-07-18 06:30:38,475 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2019-07-18 06:30:38,475 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-07-18 06:30:38,478 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2019-07-18 06:30:38,478 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2019-07-18 06:30:38,480 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2019-07-18 06:30:38,480 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2019-07-18 06:30:38,480 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2019-07-18 06:30:38,480 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2019-07-18 06:30:38,526 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2019-07-18 06:30:38,526 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-07-18 06:30:38,526 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2019-07-18 06:30:38,526 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2019-07-18 06:30:38,528 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2019-07-18 06:30:38,528 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2019-07-18 06:30:38,528 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2019-07-18 06:30:38,531 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-07-18 06:30:38,531 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2019-07-18 06:30:38,531 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-07-18 06:30:38,536 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2019-07-18 06:30:38,536 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-07-18 06:30:38,537 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2019-07-18 06:30:38,537 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2019-07-18 06:30:38,537 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2019-07-18 06:30:38,537 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2019-07-18 06:30:38,559 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/modules/hadoop-2.7.2/data/tmp/dfs/name/in_use.lock acquired by nodename 14037@Hadoop
2019-07-18 06:30:39,108 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2019-07-18 06:30:39,125 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2019-07-18 06:30:39,160 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2019-07-18 06:30:39,160 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /opt/modules/hadoop-2.7.2/data/tmp/dfs/name/current/fsimage_0000000000000000000
2019-07-18 06:30:39,168 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=true, isRollingUpgrade=false)
2019-07-18 06:30:39,172 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2019-07-18 06:30:39,172 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 623 msecs
2019-07-18 06:30:39,383 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to mycluster:8020
2019-07-18 06:30:39,385 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2019-07-18 06:30:39,393 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-07-18 06:30:39,400 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2019-07-18 06:30:39,400 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2019-07-18 06:30:39,411 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2019-07-18 06:30:39,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2019-07-18 06:30:39,412 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2019-07-18 06:30:39,413 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2019-07-18 06:30:39,416 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Failed on local exception: java.net.SocketException: Unresolved address; Host Details : local host is: "mycluster"; destination host is: (unknown):0;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:773)
	at org.apache.hadoop.ipc.Server.bind(Server.java:425)
	at org.apache.hadoop.ipc.Server$Listener.
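A reading of the log, offered as a guess rather than a confirmed diagnosis: fs.defaultFS is the HA nameservice hdfs://mycluster and the log reports "HA Enabled: true", yet the line "RPC server is binding to mycluster:8020" shows the NameNode treating the logical nameservice name as if it were a real hostname, which then fails with "Unresolved address". That pattern usually means the NameNode could not find its own RPC address under the nameservice, typically because the HA entries in hdfs-site.xml are missing, or because the nameservice/NameNode IDs in the property keys do not match. A minimal sketch of the HA block in hdfs-site.xml follows; the NameNode IDs nn1/nn2 and the hostnames hadoop101/hadoop102 are placeholders for illustration, not values taken from this cluster:

```xml
<!-- Sketch only: nn1/nn2 and hadoop101/hadoop102 are example values.
     Every key must use the exact nameservice ID ("mycluster") and the
     NameNode IDs listed in dfs.ha.namenodes.mycluster. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>hadoop101:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>hadoop102:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>hadoop101:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>hadoop102:50070</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If these properties are already present, it is worth checking for typos in the suffixes of the dfs.namenode.rpc-address.* keys and verifying that the hostnames used there resolve from the machine "Hadoop" (e.g. via /etc/hosts), since any mismatch makes the NameNode fall back to the bare nameservice name.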