
HICC start problem: "Unable to load dashboard"



scott

When I run bin/chukwa hicc, I can see the web interface, but it reports "Unable to load dashboard".

The Chukwa log file is below; can anyone help?

2012-05-23 04:02:21,912 INFO main ChukwaConfiguration - chukwaConf is /home/hadoop/hadoop/chukwa/conf
2012-05-23 04:02:23,010 INFO main log - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-05-23 04:02:23,110 INFO main log - jetty-6.1.26
2012-05-23 04:02:25,645 INFO main log - Opened /home/hadoop/hadoop/chukwa/logs/2012_05_23.request.log
2012-05-23 04:02:25,687 INFO main log - Started SelectChannelConnector@0.0.0.0:4080
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:host.name=master
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.version=1.6.0_31
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.vendor=Sun Microsystems Inc.
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.home=/usr/local/jdk1.6/jre
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.class.path=/home/hadoop/hadoop/chukwa/conf:/home/hadoop/hadoop/hadoop-1.0.0/conf:.:/usr/local/jdk1.6/lib:/home/hadoop/hadoop/hbase-0.92.1/conf:/home/hadoop/hadoop/hadoop-1.0.0/conf:/home/hadoop/hadoop/hbase-0.92.1/conf:/home/hadoop/hadoop/hadoop-1.0.0/conf:/home/hadoop/hadoop/chukwa/share/chukwa/webapps/hicc.war::/home/hadoop/hadoop/chukwa/share/chukwa/chukwa-0.5.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/chukwa-0.5.0-client.jar:/home/hadoop/hadoop/chukwa/share/chukwa/demux.jar:/home/hadoop/hadoop/chukwa/share/chukwa/chukwa-0.5.0-tests.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jsr311-api-1.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/spring-core-3.0.3.RELEASE.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jackson-xc-1.5.5.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-logging-api-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/kahadb-5.5.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-digester-1.8.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jersey-json-1.4.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/hadoop-core-1.0.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/hbase-0.90.4-tests.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-configuration-1.7.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jdiff-1.0.9.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jettison-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/prefuse-beta-20071021.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/servlet-api-2.3.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/confspellcheck.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jchronic-0.2.3.jar:/home/hadoop/hadoop/c
hukwa/share/chukwa/lib/spring-context-3.0.3.RELEASE.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-el-1.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/geronimo-jms_1.1_spec-1.1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/prefuse.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/stax-api-1.0.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/aopalliance-1.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/kfs-0.3.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jsr311-api-1.1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/spring-asm-3.0.3.RELEASE.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/activemq-protobuf-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/mina-core-2.0.0-M5.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-math-2.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/asm-3.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/libthrift-0.5.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/log4j-1.2.16.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-collections-3.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/geronimo-j2ee-management_1.1_spec-1.0.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/avalon-framework-4.1.3.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/json-lib-2.2.3-jdk15.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/slf4j-api-1.5.11.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-beanutils-1.8.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-fileupload-1.2.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jets3t-0.7.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/spring-aop-3.0.3.RELEASE.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jersey-server-1.4.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jackson-mapper-asl-1.0.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/avro-1.3.3.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/ftpserver-deprecated-1.0.0-M2.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/servlet-api-2.5-6.1.
14.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/html-filter-1.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jaxb-api-2.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-io-1.4.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/hamcrest-core-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/xmlenc-0.52.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/activemq-core-5.5.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/protobuf-java-2.3.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jruby-complete-1.6.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/json-20090211.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/json-simple-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/xercesImpl-2.10.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/spring-beans-3.0.3.RELEASE.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/hbase-0.92.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/spring-mock-2.0.8.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/core-3.1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jetty-6.1.26.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/xml-apis-1.4.01.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/guava-10.0.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-logging-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-lang-2.4.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/oro-2.0.8.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/activation-1.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jasypt-1.7.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/junit-4.10.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-codec-1.
3.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/logkit-1.0.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/sigar.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/hadoop-test-1.0.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/json.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/zookeeper-3.4.3.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jersey-core-1.4.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/ftplet-api-1.0.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-cli-2.0-SNAPSHOT.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/spring-expression-3.0.3.RELEASE.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-net-1.4.1.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/org.osgi.core-4.1.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/ftpserver-core-1.0.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/thrift-0.2.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jline-0.9.94.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/commons-cli-1.2.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/NagiosAppender-1.5.0.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/activeio-core-3.1.2.jar:/home/hadoop/hadoop/chukwa/share/chukwa/lib/jersey-bundle-1.10.jar:
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.library.path=/home/hadoop/hadoop/hadoop-1.0.0/lib/native/Linux-i386-32
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.io.tmpdir=/tmp
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:java.compiler=<NA>
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:os.name=Linux
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:os.arch=amd64
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:os.version=3.0.0-12-generic
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:user.name=hadoop
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:user.home=/home/hadoop
2012-05-23 04:02:34,368 INFO 597295774@qtp-1750442808-7 ZooKeeper - Client environment:user.dir=/home/hadoop/hadoop/chukwa
2012-05-23 04:02:34,369 INFO 597295774@qtp-1750442808-7 ZooKeeper - Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=hconnection
2012-05-23 04:02:34,407 INFO 597295774@qtp-1750442808-7-SendThread() ClientCnxn - Opening socket connection to server /10.197.64.73:2181
2012-05-23 04:02:34,409 INFO 597295774@qtp-1750442808-7 RecoverableZooKeeper - The identifier of this process is 26111@master
2012-05-23 04:02:34,426 WARN 597295774@qtp-1750442808-7-SendThread(master:2181) ZooKeeperSaslClient - SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-05-23 04:02:34,426 INFO 597295774@qtp-1750442808-7-SendThread(master:2181) ZooKeeperSaslClient - Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-05-23 04:02:34,428 INFO 597295774@qtp-1750442808-7-SendThread(master:2181) ClientCnxn - Socket connection established to master/10.197.64.73:2181, initiating session
2012-05-23 04:02:34,432 INFO 597295774@qtp-1750442808-7-SendThread(master:2181) ClientCnxn - Session establishment complete on server master/10.197.64.73:2181, sessionid = 0x1377403e5530016, negotiated timeout = 180000
2012-05-23 04:02:35,227 INFO 1900733121@qtp-1750442808-2 ChukwaConfiguration - chukwaConf is /home/hadoop/hadoop/chukwa/conf
2012-05-23 04:02:35,402 INFO 1900733121@qtp-1750442808-2 ChukwaConfiguration - chukwaConf is /home/hadoop/hadoop/chukwa/conf
2012-05-23 04:02:35,428 ERROR 1498770706@qtp-1750442808-4 log - Error for /hicc/v1/view/list
java.lang.IncompatibleClassChangeError: Class javax.ws.rs.core.Response$Status does not implement the requested interface javax.ws.rs.core.Response$StatusType
        at com.sun.jersey.spi.container.ContainerResponse.getStatus(ContainerResponse.java:548)
        at com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.commitWrite(ContainerResponse.java:156)
        at com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.write(ContainerResponse.java:133)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
        at java.io.BufferedWriter.flush(BufferedWriter.java:236)
        at com.sun.jersey.core.util.ReaderWriter.writeToAsString(ReaderWriter.java:191)
        at com.sun.jersey.core.provider.AbstractMessageReaderWriterProvider.writeToAsString(AbstractMessageReaderWriterProvider.java:128)
        at com.sun.jersey.core.impl.provider.entity.StringProvider.writeTo(StringProvider.java:88)
        at com.sun.jersey.core.impl.provider.entity.StringProvider.writeTo(StringProvider.java:58)
        at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:299)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1326)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1239)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1229)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:497)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:684)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2012-05-23 04:02:35,616 ERROR 1900733121@qtp-1750442808-2 WidgetBean - java.lang.NullPointerException
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:55)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.cacheWidgets(WidgetStore.java:98)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.list(WidgetStore.java:121)
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.update(WidgetBean.java:158)
        at org.apache.hadoop.chukwa.rest.bean.ColumnBean.update(ColumnBean.java:63)
        at org.apache.hadoop.chukwa.rest.bean.PagesBean.update(PagesBean.java:83)
        at org.apache.hadoop.chukwa.rest.bean.ViewBean.update(ViewBean.java:127)
        at org.apache.hadoop.chukwa.datastore.ViewStore.load(ViewStore.java:92)
        at org.apache.hadoop.chukwa.datastore.ViewStore.<init>(ViewStore.java:61)
        at org.apache.hadoop.chukwa.rest.resource.ViewResource.getView(ViewResource.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:168)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:70)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:279)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:86)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:74)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1357)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1289)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1239)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1229)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:497)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:684)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

2012-05-23 04:02:35,617 ERROR 1900733121@qtp-1750442808-2 WidgetStore - java.text.ParseException: java.lang.NullPointerException
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:55)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.cacheWidgets(WidgetStore.java:98)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.list(WidgetStore.java:121)
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.update(WidgetBean.java:158)
        at org.apache.hadoop.chukwa.rest.bean.ColumnBean.update(ColumnBean.java:63)
        at org.apache.hadoop.chukwa.rest.bean.PagesBean.update(PagesBean.java:83)
        at org.apache.hadoop.chukwa.rest.bean.ViewBean.update(ViewBean.java:127)
        at org.apache.hadoop.chukwa.datastore.ViewStore.load(ViewStore.java:92)
        at org.apache.hadoop.chukwa.datastore.ViewStore.<init>(ViewStore.java:61)
        at org.apache.hadoop.chukwa.rest.resource.ViewResource.getView(ViewResource.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:168)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:70)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:279)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:86)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:74)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1357)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1289)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1239)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1229)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:497)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:684)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:80)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.cacheWidgets(WidgetStore.java:98)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.list(WidgetStore.java:121)
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.update(WidgetBean.java:158)
        at org.apache.hadoop.chukwa.rest.bean.ColumnBean.update(ColumnBean.java:63)
        at org.apache.hadoop.chukwa.rest.bean.PagesBean.update(PagesBean.java:83)
        at org.apache.hadoop.chukwa.rest.bean.ViewBean.update(ViewBean.java:127)
        at org.apache.hadoop.chukwa.datastore.ViewStore.load(ViewStore.java:92)
        at org.apache.hadoop.chukwa.datastore.ViewStore.<init>(ViewStore.java:61)
        at org.apache.hadoop.chukwa.rest.resource.ViewResource.getView(ViewResource.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:168)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:70)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:279)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:86)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:74)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1357)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1289)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1239)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1229)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:497)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:684)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
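
The IncompatibleClassChangeError above is the classic symptom of two different copies of the javax.ws.rs classes being on the classpath (here, the standalone jsr311-api jars alongside the Jersey jars, as the reply below confirms). A small shell helper, written as a sketch and assuming unzip is available, can list which jars in a directory ship a given class:

```shell
# find_jars_with LIBDIR PATTERN: print every jar under LIBDIR whose table of
# contents includes a class file path matching PATTERN
# (e.g. javax/ws/rs/core/Response).
find_jars_with() {
    dir="$1"
    pattern="$2"
    for jar in "$dir"/*.jar; do
        # unzip -l lists the archive contents without extracting anything;
        # errors (e.g. no jars matched the glob) are discarded.
        if unzip -l "$jar" 2>/dev/null | grep -q "$pattern"; then
            echo "$jar"
        fi
    done
}
```

Running it as `find_jars_with /home/hadoop/hadoop/chukwa/share/chukwa/lib javax/ws/rs/core/Response` against the lib directory from the log should surface every jar that bundles the conflicting class.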



Re: HICC start problem: "Unable to load dashboard"

scott
The previous "unable to load dashboard" problem was caused by a conflict between the Jersey and JSR-311 jars: the Jersey bundle already includes the JSR-311 package contents. I moved the jsr311 jars out of the classpath and it works; the web interface is displayed. But there is still a problem with the "SystemMetrics" widget: no data is shown in it, no matter which time range I try.
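
The workaround described above can be sketched as a small shell helper. The jar names come from the classpath in the log, and the lib directory path is an assumption based on that classpath, so adjust it for your install. Moving the jars into a subdirectory, rather than deleting them, keeps the change easy to revert:

```shell
# disable_jsr311_jars LIBDIR: move the standalone JSR-311 API jars out of
# LIBDIR into a LIBDIR/disabled subdirectory so they drop off the classpath
# but can be restored later.
disable_jsr311_jars() {
    libdir="$1"
    mkdir -p "$libdir/disabled"
    for jar in jsr311-api-1.0.jar jsr311-api-1.1.1.jar; do
        if [ -f "$libdir/$jar" ]; then
            mv "$libdir/$jar" "$libdir/disabled/"
        fi
    done
}
```

For the install shown in the log, the call would be `disable_jsr311_jars /home/hadoop/hadoop/chukwa/share/chukwa/lib`, followed by a HICC restart so the new classpath takes effect.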

And I am sure the HBase table "SystemMetrics" exists and has data. Can anyone help?

Here is the log file:

2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:java.io.tmpdir=/tmp
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:java.compiler=<NA>
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:os.name=Linux
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:os.arch=amd64
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:os.version=3.0.0-12-generic
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:user.name=hadoop
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:user.home=/home/hadoop
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Client environment:user.dir=/home/hadoop/hadoop/chukwa
2012-05-23 21:44:26,473 INFO 1498770706@qtp-1750442808-4 ZooKeeper - Initiating client connection, connectString=master:2181 sessionTimeout=180000 watcher=hconnection
2012-05-23 21:44:26,523 INFO 1498770706@qtp-1750442808-4-SendThread() ClientCnxn - Opening socket connection to server /10.197.64.73:2181
2012-05-23 21:44:26,524 INFO 1498770706@qtp-1750442808-4 RecoverableZooKeeper - The identifier of this process is 31528@master
2012-05-23 21:44:26,536 WARN 1498770706@qtp-1750442808-4-SendThread(master:2181) ZooKeeperSaslClient - SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-05-23 21:44:26,536 INFO 1498770706@qtp-1750442808-4-SendThread(master:2181) ZooKeeperSaslClient - Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2012-05-23 21:44:26,537 INFO 1498770706@qtp-1750442808-4-SendThread(master:2181) ClientCnxn - Socket connection established to master/10.197.64.73:2181, initiating session
2012-05-23 21:44:26,631 INFO 1498770706@qtp-1750442808-4-SendThread(master:2181) ClientCnxn - Session establishment complete on server master/10.197.64.73:2181, sessionid = 0x1377403e5530025, negotiated timeout = 180000
2012-05-23 21:44:27,656 INFO 597295774@qtp-1750442808-7 ChukwaConfiguration - chukwaConf is /home/hadoop/hadoop/chukwa/conf
2012-05-23 21:44:27,831 INFO 597295774@qtp-1750442808-7 ChukwaConfiguration - chukwaConf is /home/hadoop/hadoop/chukwa/conf
2012-05-23 21:44:27,970 ERROR 597295774@qtp-1750442808-7 WidgetBean - java.lang.NullPointerException
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:55)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.cacheWidgets(WidgetStore.java:98)
        at org.apache.hadoop.chukwa.datastore.WidgetStore.list(WidgetStore.java:121)
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.update(WidgetBean.java:158)
        at org.apache.hadoop.chukwa.rest.bean.ColumnBean.update(ColumnBean.java:63)
        at org.apache.hadoop.chukwa.rest.bean.PagesBean.update(PagesBean.java:83)
        at org.apache.hadoop.chukwa.rest.bean.ViewBean.update(ViewBean.java:127)
        at org.apache.hadoop.chukwa.datastore.ViewStore.load(ViewStore.java:92)
        at org.apache.hadoop.chukwa.datastore.ViewStore.<init>(ViewStore.java:61)
        at org.apache.hadoop.chukwa.rest.resource.ViewResource.getView(ViewResource.java:52)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:168)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:70)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:279)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:86)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:136)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:74)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1357)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1289)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1239)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1229)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:420)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:497)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:684)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

2012-05-23 21:44:27,970 ERROR 597295774@qtp-1750442808-7 WidgetStore - java.text.ParseException: java.lang.NullPointerException
        at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:55)






Great things come from long accumulation. (厚积薄发)

System.out.println("hello world again!")


Re: hicc start problem:"Unable to load dashboard"

Eric Yang-3
Hi Scott,

You might need to use the Host Selection widget to select a host for the
System Metrics widget to show data.  For Cluster System Metrics, you need to
run pig ClusterSummary.pig to get the aggregation going.  Hope this helps.

regards,
Eric

On Thu, May 24, 2012 at 9:03 PM, scott <[hidden email]> wrote:

> The previous "unable to load dashboard" problem was caused by a jersey/jsr
> package conflict (the jersey jar already includes the jsr package content).
> I moved the jsr jars out of the classpath and now the web interface is
> displayed. However, the "SystemMetrics" widget still shows no data, for
> every time range I tried...
>
> I am sure the HBase table "SystemMetrics" exists and has data. Can anyone
> help?
>
> here is  the log file:
>
> 2012-05-23 21:44:27,970 ERROR 597295774@qtp-1750442808-7 WidgetBean - java.lang.NullPointerException
>         at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:55)
> [...]
> 2012-05-23 21:44:27,970 ERROR 597295774@qtp-1750442808-7 WidgetStore - java.text.ParseException: java.lang.NullPointerException
>         at org.apache.hadoop.chukwa.rest.bean.WidgetBean.<init>(WidgetBean.java:55)
> --
> View this message in context: http://apache-chukwa.679492.n3.nabble.com/hicc-start-problem-Unable-to-load-dashboard-tp4013192p4014627.html
> Sent from the Chukwa - Users mailing list archive at Nabble.com.

Re: hicc start problem:"Unable to load dashboard"

scott
Hi Eric,

Thanks for your reply. The web interface now works fine (for my usage I also monitor other logs, not only the Hadoop cluster).

Next I stored the data into HDFS and ran Demux and PostProcessor to divide the logs into datatype/datetime categories.
Here I hit the exceptions below: jdbc.conf is not found.
Does this mean that Demux and PostProcessor write their results into a database? Can you give some hints on what gets written, and why?


Thanks~


Best Regards!
Scott 


The error log goes to the console, not to the Demux or PostProcessor log files:
---------------------------------------------------------------------------------

hadoop@master:~/hadoop/chukwa$ tail
java.io.FileNotFoundException: /home/hadoop/hadoop/chukwa/conf/jdbc.conf (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:120)
        at java.io.FileReader.<init>(FileReader.java:55)
        at org.apache.hadoop.chukwa.util.ClusterConfig.getContents(ClusterConfig.java:36)
        at org.apache.hadoop.chukwa.util.ClusterConfig.<init>(ClusterConfig.java:60)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.initEnv(MetricDataLoader.java:96)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.run(MetricDataLoader.java:202)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.call(MetricDataLoader.java:577)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.call(MetricDataLoader.java:48)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
java.io.FileNotFoundException: /home/hadoop/hadoop/chukwa/conf/jdbc.conf (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:120)
        at java.io.FileReader.<init>(FileReader.java:55)
        at org.apache.hadoop.chukwa.util.ClusterConfig.getContents(ClusterConfig.java:36)
        at org.apache.hadoop.chukwa.util.ClusterConfig.<init>(ClusterConfig.java:60)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.initEnv(MetricDataLoader.java:96)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.run(MetricDataLoader.java:202)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.call(MetricDataLoader.java:577)
        at org.apache.hadoop.chukwa.dataloader.MetricDataLoader.call(MetricDataLoader.java:48)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)



2012/6/8 Eric Yang-3 [via Apache Chukwa] <[hidden email]>
Hi Scott,

You might need to use Host Selection widget to select a host for
System Metrics to show up.  For Cluster System Metrics, you need to
run pig ClusterSummary.pig to get aggregation going.  Hope this helps.

regards,
Eric



Re: hicc start problem:"Unable to load dashboard"

Eric Yang-3
Hi Scott,

Demux with HDFS and PostProcessor store data to MySQL.  This is the
old design from before HBase came into existence.  The idea was to cleanse
the data and extract metrics to store in a database for hicc to query.
PostProcessor is legacy code and should not be used.

regards,
Eric

On Thu, Jun 14, 2012 at 3:26 AM, scott <[hidden email]> wrote:

> Hi  Eric,
>
> [...]
> Here i found some running exceptions as belows: it says jdbc.conf is not
> found.
> Does it mean in Demux and PostProcess, the results will be written into
> databases?  can you give some hints on what will be written for any reasons?
> [...]

Re: hicc start problem:"Unable to load dashboard"

scott
Thanks, Eric.

I still have some doubts about the process of writing logs to HBase and HDFS.

1. In my project I need to collect and record some metrics for near-real-time monitoring, and also store some logs for later analysis. For monitoring, HBase can be used; for later log analysis, the logs should be stored in HDFS. Chukwa supports writing to both HBase and HDFS, which can be set in chukwa-collector-conf.xml using chukwaCollector.pipeline:

<name>chukwaCollector.pipeline</name>
<value>org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
....

However, in chukwaCollector.pipeline we must put hbase.HBaseWriter ahead of writer.SeqFileWriter, because in the SeqFileWriter source code I found that chunks are not passed on to the next writer. Please verify that.

2. For HDFS, I want to store the data categorized by [dataType]/[yyyyMMdd]/[HH]/[mm]/. In your last mail you said it is the old design to have PostProcessor extract metrics into a DB and it should not be used now. What should we do instead to aggregate the data into such categories? Is there any new code checked in that solves this?
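As an illustration of that layout, here is a minimal sketch (a hypothetical helper of my own, not Chukwa code) that maps a chunk's data type and timestamp to a [dataType]/[yyyyMMdd]/[HH]/[mm]/ directory:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class BucketPath {
    // Hypothetical helper: builds the [dataType]/[yyyyMMdd]/[HH]/[mm]/
    // directory described above from a data type and a timestamp.
    static String bucketFor(String dataType, long tsMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd/HH/mm");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // avoid host-local buckets
        return dataType + "/" + fmt.format(new Date(tsMillis)) + "/";
    }

    public static void main(String[] args) {
        // 2012-06-18 07:37:00 UTC
        System.out.println(bucketFor("SystemMetrics", 1340005020000L));
        // -> SystemMetrics/20120618/07/37/
    }
}
```

Formatting in UTC keeps agents on machines in different time zones writing into the same minute bucket.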


Regards
Scott




2012/6/15 Eric Yang-3 [via Apache Chukwa] <[hidden email]>
Hi Scott,

Demux with HDFS and PostProcessor store data to mysql.  This is the
old design before HBase came into existence.  [...]  PostProcessor was
legacy code and it should not be used.

regards,
Eric






Re: hicc start problem:"Unable to load dashboard"

Eric Yang-3
On Mon, Jun 18, 2012 at 7:37 PM, scott <[hidden email]> wrote:

> [...]
> However, in chukwaCollector.pipeline, we must write hbase.HBaseWriter ahead
> of writer.SeqFileWriter,  for in SeqFileWriter source code, i found
> that chunks will not pass to next writer. please verify that.

SeqFileWriter does not pass chunks to the next writer; that is why it
has to be the last writer in the pipeline, as a workaround.
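To make that constraint concrete, here is a toy sketch of the pipeline idea (hypothetical classes, not the real Chukwa writers): a forwarding writer hands each chunk to the next stage after handling it, while a terminal writer consumes chunks without forwarding, so it only works at the end of the chain.

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineSketch {
    // Hypothetical stand-ins for the real writer classes.
    interface Writer { void add(String chunk); }

    // Handles each chunk, then passes it down the pipeline
    // (the HBaseWriter-style behaviour).
    static class ForwardingWriter implements Writer {
        final List<String> seen = new ArrayList<>();
        final Writer next;
        ForwardingWriter(Writer next) { this.next = next; }
        public void add(String chunk) {
            seen.add(chunk);                    // "write" the chunk
            if (next != null) next.add(chunk);  // forward to next stage
        }
    }

    // Consumes chunks without forwarding (the SeqFileWriter-style
    // behaviour): any writer placed after it would see nothing.
    static class TerminalWriter implements Writer {
        final List<String> seen = new ArrayList<>();
        public void add(String chunk) { seen.add(chunk); }
    }

    public static void main(String[] args) {
        TerminalWriter seqFile = new TerminalWriter();
        ForwardingWriter hbase = new ForwardingWriter(seqFile);
        hbase.add("chunk-1");
        // Both writers saw the chunk only because the terminal one is last.
        System.out.println(hbase.seen.size() + " " + seqFile.seen.size());
        // prints: 1 1
    }
}
```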

> 2. For HDFS, I want to store the data categorized by
> [dataType]/[yyyyMMdd]/[HH]/[mm]/.  in your last letter, you said that it's
> old design to have PostProcessor to extract metrics to store in DB and it
> should not be used now, then, what will we do to achieve aggregating the
> data into such category. Is there any new code to check in to solve it ?

There is no new code for injecting MapReduce output into the DB or
HBase; patches are welcome.

regards,
Eric

Re: hicc start problem:"Unable to load dashboard"

scott
Hi Eric,

Could you please give me some suggestions on continuously tailing a file when rotation happens?

In our project the logging mechanism is as follows:
when a file (e.g. file A) exceeds the size limit, all its data is copied to a backup file (whose name always ends with ***bak00), and then file A is truncated and reset.

It seems there is some code in Chukwa to detect rotation, and a solution for it. Could you please give some details on that, and any advice, given our logging mechanism, on small revisions that would let us collect the log data completely?

Thanks!

Regards,
Scott Huan






2012/6/19 Eric Yang-3 [via Apache Chukwa] <[hidden email]>
On Mon, Jun 18, 2012 at 7:37 PM, scott <[hidden email]> wrote:

> [...]

SeqFileWriter does not pass to next writer, this is the reason that it
has to be the last writer in the pipeline as a workaround.

> [...]

There is no new code written for inject MR data to DB or HBase,
patches are welcome.

regards,
Eric




厚积薄发 ("accumulate deeply, release slowly")

System.out.println("hello world again!")

Re: hicc start problem:"Unable to load dashboard"

Eric Yang-3
Hi Scott,

File tailing adaptors use two file pointers to track the current offset
in the file.  The first file pointer is persisted for tracking, and the
second file pointer periodically checks the end-of-file offset.  When the
second file pointer's offset is smaller than the first's, a file rotation
has been detected.  LastModifiedTime should also be compared between the
two file pointers to cover the case where the previous day's log file is
0 bytes and the next day's log file suddenly grows.
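The two-pointer check described above can be sketched roughly like this (illustrative names only, not Chukwa's actual FileTailingAdaptor code):

```java
// Sketch of rotation detection via two offsets:
// `persistedOffset` is the durable pointer (how far we have shipped),
// while `currentEof` and `currentMtime` are re-read periodically.
class RotationCheck {
    static boolean rotated(long persistedOffset, long persistedMtime,
                           long currentEof, long currentMtime) {
        // File shrank below our checkpoint: it was truncated/rotated.
        if (currentEof < persistedOffset) {
            return true;
        }
        // Same (possibly zero) length but newer mtime: covers an empty
        // previous-day file being replaced by the next day's file.
        return currentEof == persistedOffset && currentMtime > persistedMtime;
    }

    public static void main(String[] args) {
        System.out.println(rotated(500, 1000L, 120, 2000L)); // truncated -> true
        System.out.println(rotated(500, 1000L, 800, 2000L)); // normal growth -> false
    }
}
```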

You should probably rename file A to **bak00 (or create a hard link and
then remove file A), and let file A be recreated from scratch.  This will
save a lot of time in log file rotation.
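A rename-based rotation along those lines might look like this (file names here are just examples, not your application's actual ones):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Rename-based rotation: moving the inode is atomic and avoids copying
// the whole file; a tailer then sees file A shrink to 0 when recreated.
public class Rotate {
    public static void main(String[] args) throws IOException {
        Path a = Paths.get("fileA.log");
        Files.write(a, "old data\n".getBytes());

        // Rename instead of copy+truncate (Files.createLink would give
        // the hard-link variant, followed by deleting file A).
        Path bak = Paths.get("fileA.log.bak00");
        Files.move(a, bak, StandardCopyOption.REPLACE_EXISTING);

        Files.createFile(a); // let file A start again from scratch
        System.out.println(Files.size(a));   // 0
        System.out.println(Files.size(bak)); // 9
    }
}
```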
Hope this helps.

regards,
Eric

On Thu, Jun 28, 2012 at 3:19 AM, scott <[hidden email]> wrote:

> Hi, Eric
>
> could you please give me some suggestions on continuously tailing a file if
> any rotates happened?
>
> In our project, the log mechasim is as follows:
> When a file (e.g., file A) exceeds the size limit, all of its data is copied
> to a backup file (whose name always ends with ***bak00), and then file A is
> truncated and reset.
>
> It seems there is some code in Chukwa to detect the rotation, along with a
> solution for it. Could you please give some details on that, and any advice,
> given our log mechanism, on small revisions we could make to fully collect
> the log data?
>
> Thanks!
>
> Regards,
> Scott Huan
>
>
>
>
>
>
> 2012/6/19 Eric Yang-3 [via Apache Chukwa] <[hidden email]>
>>
>> On Mon, Jun 18, 2012 at 7:37 PM, scott <[hidden email]> wrote:
>>
>> > Thanks,Eric.
>> >
>> > I still have some doubts about the process of writing logs to hbase and
>> > hdfs.
>> >
>> > 1. In my project, i need to collect and record some metrics for near
>> > real-time monitoring, and also store some logs for later analysis. For
>> > monitoring, hbase can be used, and for later log analysis, logs should
>> > be
>> > stored into hdfs.  Chukwa supports writing to both hbase and hdfs.
>> > which can be set in
>> > chukwa-collector-conf.xml  using chukwaCollector.pipeline
>> >
>> > <name>chukwaCollector.pipeline</name>
>> >
>> > <value>org.apache.hadoop.chukwa.datacollection.writer.hbase.HBaseWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
>> > ....
>> > However, in chukwaCollector.pipeline, we must put hbase.HBaseWriter
>> > ahead of writer.SeqFileWriter, because in the SeqFileWriter source code
>> > I found that chunks are not passed to the next writer. Please verify
>> > that.
>> SeqFileWriter does not pass chunks to the next writer; this is why it
>> has to be the last writer in the pipeline as a workaround.
>>
>> > 2. For HDFS, I want to store the data categorized by
>> > [dataType]/[yyyyMMdd]/[HH]/[mm]/.  in your last letter, you said that
>> > it's
>> > old design to have PostProcessor to extract metrics to store in DB and
>> > it
>> > should not be used now, then, what will we do to achieve aggregating the
>> > data into such category. Is there any new code to check in to solve it ?
>>
>> There is no new code written for injecting MR output into the DB or HBase;
>> patches are welcome.
>>
>> regards,
>> Eric
>>
>>
>
>
> 厚积薄发
>
> System.out.println("hello world again!")
>
>
> Sent from the Chukwa - Users mailing list archive at Nabble.com.