A. Can hbase.rootdir be left unset?
If hbase.zookeeper.property.clientPort is not set, HBase falls back to a default port (2181), which may not be one of the ports (3351–3353 here) your ZooKeeper ensemble actually serves. Pick one of those ports and configure it.
HBase configuration
Create zookeeper_data and hbase_tmp under this path.
》hbase-env.sh
export JAVA_HOME=/home/hadoop/tools/jdk1.6.0_27/
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MANAGES_ZK=false
If you are using your own (external) ZooKeeper, disable HBase's managed ZooKeeper here. Note that merely commenting the line out leaves the default (true) in effect, so set it to false explicitly.
》hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8000/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>3351</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.authProvider.1</name>
    <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/hbase-0.94.0-security/zookeeper_data</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/hbase-0.94.0-security/hbase_tmp</value>
  </property>
</configuration>
B. Quick reference: default web UI ports of big-data systems
1. HDFS page: 50070
2. YARN management UI: 8088
3. HistoryServer management UI: 19888
4. ZooKeeper service port: 2181
5. MySQL service port: 3306
6. HiveServer2 (JDBC/Thrift): 10000
7. Kafka service port: 9092
8. Azkaban UI: 8443
9. HBase UI: 16010 (HBase 1.0+) or 60010 (older versions)
10. Spark UI: 8080
11. Spark master URL: 7077
C. Querying HBase data happily with happybase
CRUD operations on HBase data with happybase.
Prerequisites: the happybase library is installed (pip install happybase), an HBase environment is available, and the Thrift port is open (nohup hbase thrift start &); Thrift's default port is 9090, and 10.10.30.200 is the HBase host IP.
The scan method:
Parameters:
row_start, row_stop: start and stop rowkeys; queries the data between the two rowkeys
row_prefix: rowkey prefix. Note: when row_prefix is used, row_start and row_stop must not be
filter: the filter string to apply (effective on HBase 0.92 and above)
timestamp: query by the given timestamp
reverse: defaults to False; when True, scan results are returned in descending rowkey order
e.g.:
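The original example here was an image; below is a minimal sketch of equivalent scan calls, assuming a hypothetical `student` table and the Thrift endpoint from the prerequisites above:

```python
def scan_table(table, row_start=None, row_stop=None, row_prefix=None,
               reverse=False):
    """Thin wrapper over table.scan(); row_prefix must not be combined
    with row_start/row_stop, so pass one or the other."""
    if row_prefix is not None:
        return table.scan(row_prefix=row_prefix, reverse=reverse)
    return table.scan(row_start=row_start, row_stop=row_stop, reverse=reverse)

def demo():
    """Run this against a live cluster: needs `pip install happybase`
    and a running Thrift server (see the prerequisites above)."""
    import happybase
    conn = happybase.Connection("10.10.30.200", port=9090)
    table = conn.table("student")
    # all rows whose rowkey starts with b"student"
    for key, data in scan_table(table, row_prefix=b"student"):
        print(key, data)
    # rows between two rowkeys, in descending rowkey order
    for key, data in scan_table(table, row_start=b"student1",
                                row_stop=b"student9", reverse=True):
        print(key, data)
```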
The put method:
e.g.:
△ If the rowkey passed to put already exists, the data is updated instead.
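A sketch of put, again with the hypothetical `student` table; the second call in the demo reuses the rowkey and therefore updates the row:

```python
def put_row(table, row_key, data):
    """table.put() inserts a row, or overwrites the stored cells
    when row_key already exists (i.e. an update)."""
    table.put(row_key, data)

def demo():
    """Run against a live cluster; needs happybase and a Thrift server."""
    import happybase
    conn = happybase.Connection("10.10.30.200", port=9090)
    table = conn.table("student")
    put_row(table, b"student1", {b"info:name": b"zhangsan"})
    put_row(table, b"student1", {b"info:name": b"lisi"})  # same rowkey: update
```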
The delete method:
row: deletes the row whose rowkey is row
columns: when the columns parameter is given, only those columns are deleted
e.g.:
Delete the name data of rowkey student2:
After a successful delete:
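The elided screenshot showed the call and its effect; a sketch of both delete styles, with the same hypothetical table:

```python
def delete_row(table, row_key, columns=None):
    """Delete the whole row, or only the given columns when specified;
    table.delete() treats columns=None as 'delete everything'."""
    table.delete(row_key, columns=columns)

def demo():
    """Run against a live cluster; needs happybase and a Thrift server."""
    import happybase
    conn = happybase.Connection("10.10.30.200", port=9090)
    table = conn.table("student")
    # delete only the name column of rowkey student2
    delete_row(table, b"student2", columns=[b"info:name"])
    # delete the whole row
    delete_row(table, b"student2")
```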
The batch method:
1. Batch operations
2. Managing a batch with a with block
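A sketch of both styles, assuming the same hypothetical table; leaving a `with table.batch()` block calls send() automatically:

```python
def batch_put(table, rows):
    """Buffer several puts and flush them to the server in one batch;
    exiting the with block sends the buffered mutations."""
    with table.batch() as b:
        for key, data in rows.items():
            b.put(key, data)

def demo():
    """Run against a live cluster; needs happybase and a Thrift server."""
    import happybase
    conn = happybase.Connection("10.10.30.200", port=9090)
    table = conn.table("student")

    # 1. explicit batch: mutations are sent only when send() is called
    b = table.batch()
    b.put(b"student3", {b"info:name": b"wangwu"})
    b.put(b"student4", {b"info:name": b"zhaoliu"})
    b.send()

    # 2. with-managed batch
    batch_put(table, {b"student5": {b"info:name": b"qianqi"}})
```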
The row() and rows() methods retrieve data for the given rowkey(s)
Retrieve one row:
Retrieve several rows:
Returned result:
e.g.:
Result:
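The elided example probably looked like this: table.row() returns a dict of {column: value}, while table.rows() returns a list of (rowkey, dict) pairs (sketch, hypothetical table):

```python
def rows_as_dict(table, row_keys):
    """table.rows() returns [(rowkey, {column: value}), ...];
    convert that into one dict keyed by rowkey."""
    return dict(table.rows(row_keys))

def demo():
    """Run against a live cluster; needs happybase and a Thrift server."""
    import happybase
    conn = happybase.Connection("10.10.30.200", port=9090)
    table = conn.table("student")
    print(table.row(b"student1"))                           # one row
    print(rows_as_dict(table, [b"student1", b"student2"]))  # several rows
```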
That's all for now 0v0
D. Common big-data ports (a summary)
Spark:
7077: port the Spark master and workers communicate on; also where Applications are submitted in standalone mode
8080: master web UI port (resource scheduling)
8081: worker web UI port (resource scheduling)
4040: Driver web UI port (task scheduling)
18080: Spark History Server web UI port
Zookeeper:
2181: port clients use to connect to ZooKeeper
2888: used for communication inside the ZooKeeper ensemble; the Leader listens on it
3888: ZooKeeper port used for Leader election
Hbase:
60010: HBase master web UI port (16010 since HBase 1.0)
60030: HBase RegionServer web UI port (16030 since HBase 1.0)
Hive:
9083: default listening port of the metastore service
10000: Hive JDBC (HiveServer2) port
Kafka:
9092: Kafka broker listener port, used by clients and for inter-broker traffic
Redis:
6379: Redis service port
CDH:
7180: Cloudera Manager web UI port
7182: port the Cloudera Manager Server uses to communicate with Agents
HUE:
8888: Hue web UI port
E. Hadoop default ports and their purposes
| Port | Purpose |
| --- | --- |
| 9000 | fs.defaultFS, e.g. hdfs://172.25.40.171:9000 |
| 9001 | dfs.namenode.rpc-address; DataNodes connect to this port |
| 50070 | dfs.namenode.http-address |
| 50470 | dfs.namenode.https-address |
| 50100 | dfs.namenode.backup.address |
| 50105 | dfs.namenode.backup.http-address |
| 50090 | dfs.namenode.secondary.http-address, e.g. 172.25.39.166:50090 |
| 50091 | dfs.namenode.secondary.https-address, e.g. 172.25.39.166:50091 |
| 50020 | dfs.datanode.ipc.address |
| 50075 | dfs.datanode.http.address |
| 50475 | dfs.datanode.https.address |
| 50010 | dfs.datanode.address; DataNode data-transfer port |
| 8480 | dfs.journalnode.http-address |
| 8481 | dfs.journalnode.https-address |
| 8032 | yarn.resourcemanager.address |
| 8088 | yarn.resourcemanager.webapp.address; YARN http port |
| 8090 | yarn.resourcemanager.webapp.https.address |
| 8030 | yarn.resourcemanager.scheduler.address |
| 8031 | yarn.resourcemanager.resource-tracker.address |
| 8033 | yarn.resourcemanager.admin.address |
| 8042 | yarn.nodemanager.webapp.address |
| 8040 | yarn.nodemanager.localizer.address |
| 8188 | yarn.timeline-service.webapp.address |
| 10020 | mapreduce.jobhistory.address |
| 19888 | mapreduce.jobhistory.webapp.address |
| 2888 | ZooKeeper; a Leader listens here for Follower connections |
| 3888 | ZooKeeper; used for Leader election |
| 2181 | ZooKeeper; listens for client connections |
| 60010 | hbase.master.info.port; HMaster http port |
| 60000 | hbase.master.port; HMaster RPC port |
| 60030 | hbase.regionserver.info.port; HRegionServer http port |
| 60020 | hbase.regionserver.port; HRegionServer RPC port |
| 8080 | hbase.rest.port; HBase REST server port |
| 10000 | hive.server2.thrift.port |
| 9083 | hive.metastore.uris |
The most commonly used are probably 50070 and 8088:
http://ip:50070/
Monitor job execution in the web UI:
http://ip:8088/
F. Connecting to HBase from Eclipse on Windows fails; how do I fix it? Begging the gurus!!
Case 1:
1. Test HBase:
a) cd hbase-0.90.4
b) bin/start-hbase.sh
c) bin/hbase shell
d) create 'database','cf'
e) list
f) If it works, you should see a result like this:
hbase(main):001:0> list
TABLE
database
1 row(s) in 0.5910 seconds
2. Create a Java project, copy the lib directory from hbase-0.90.4 into the project and add its jars to the classpath, along with hbase-0.90.5.jar and test.jar.
3. Create the class:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class HelloHBase {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.128.128");
        HBaseAdmin admin = new HBaseAdmin(conf);
        // fetch the descriptor of the table created in the shell above
        HTableDescriptor tableDescriptor = admin.getTableDescriptor(Bytes.toBytes("database"));
        byte[] name = tableDescriptor.getName();
        System.out.println(new String(name));
        // print each column family of the table
        HColumnDescriptor[] columnFamilies = tableDescriptor.getColumnFamilies();
        for (HColumnDescriptor d : columnFamilies) {
            System.out.println(d.getNameAsString());
        }
    }
}
Run it; it should print these two lines:
database
cf
If it doesn't, the configuration failed; check your other settings.
==============================================
Problem 1:
java.net.ConnectException: Connection refused: no further information
a. zookeeper.ClientCnxn: Session 0x0 for server null,
Fix: ZooKeeper is not started or cannot be reached; check each node's ZooKeeper status, port usage, and firewall settings.
b. getMaster attempt 4 of 10 failed; retrying after sleep of 2000
Fix: check the master log. If it contains a line like
org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as BRDVM0240,43992,1373943529301, RPC listening on /127.0.0.1:43992, sessionid=0x13fe56a7d4b0001
then HRegionServer is listening on localhost (127.0.0.1). Edit /etc/hosts on the server: in the line
127.0.0.1 servername localhost.localdomain localhost
remove servername, then restart HBase.
Case 2:
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
12/09/03 15:37:15 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.0.118:2181
12/09/03 15:37:16 INFO zookeeper.ClientCnxn: EventThread shut down
12/09/03 15:37:16 INFO zookeeper.ZooKeeper: Session: 0x0 closed
Exception in thread "main" org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:156)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1209)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:511)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:502)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:172)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:92)
at com.biencloud.test.first_hbase.main(first_hbase.java:22)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:809)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:837)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:931)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:134)
... 6 more
This error means Eclipse did not connect to ZooKeeper; add the ZooKeeper settings to your program, like this:
Configuration conf=HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum","192.168.0.118, 192.168.0.186, 192.168.0.182");
conf.set("hbase.zookeeper.property.clientPort","2222");
Original source: http://www.aboutyun.com/thread-5866-1-1.html
G. How do I check which port HBase has open?
The current default web UI port is 16010.
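One quick way to verify which web UI port is actually open is a plain TCP connect; the helper below is a generic sketch, not HBase-specific:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 16010 is the default HMaster UI port since HBase 1.0; 60010 before that
    for candidate in (16010, 60010):
        print(candidate, "open" if port_open("localhost", candidate) else "closed")
```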
H. Which ports does HBase need open?
The answer posted was this Thrift C++ client wrapper (the HBase Thrift server it talks to listens on port 9090 by default):
class HbCli {
public:
    // Constructor and Destructor
    HbCli(const char *server, const char *port);
    ~HbCli();
    // Util Functions
    bool connect();
    bool disconnect();
    bool reconnect();
    inline bool isconnect();
    // HBase DDL Functions
    bool createTable(const std::string table, const ColVec &columns);
    bool deleteTable(const std::string table);
    bool tableExists(const std::string table);
    // HBase DML Functions
    bool putRow(const std::string table, const std::string row, const std::string column, const std::string value);
    bool putRowWithColumns(const std::string table, const std::string row, const StrMap columns);
    bool putRows(const std::string table, const RowMap rows);
    bool getRow(const std::string table, const std::string row, ResVec &rowResult);
    bool getRowWithColumns(const std::string table, const std::string row, const StrVec columns, ResVec &rowResult);
    bool getRows(const std::string table, const StrVec rows, ResVec &rowResult);
    bool getRowsWithColumns(const std::string table, const StrVec rows, const StrVec columns, ResVec &rowResult);
    bool delRow(const std::string table, const std::string row);
    bool delRowWithColumn(const std::string table, const std::string row, const std::string column);
    bool delRowWithColumns(const std::string table, const std::string row, const StrVec columns);
    bool scan(const std::string table, const std::string startRow, StrVec columns, ResVec &values);
    bool scanWithStop(const std::string table, const std::string startRow, const std::string stopRow, StrVec columns, ResVec &values);
    // HBase Util Functions
    void printRow(const ResVec &rowResult);
private:
    // the template arguments were lost in the original post; these are the
    // usual Thrift client types
    boost::shared_ptr<apache::thrift::transport::TSocket> socket;
    boost::shared_ptr<apache::thrift::transport::TTransport> transport;
    boost::shared_ptr<apache::thrift::protocol::TProtocol> protocol;
    HbaseClient client;
    bool _is_connected;
};
I. HBase won't start properly: the HBase web page won't open, and HMaster exits right after starting
In that case, check the log file under HBase's logs directory; mine was hbase-hadoop-master-centos01.log.
The error it reported was
that the ZooKeeper settings in hbase-site.xml were wrong: a comma had been typed as a period. Be careful with configuration files.
Running it again still produced an error,
because the port configured in HDFS's core-site.xml and in HBase's hbase-site.xml must be identical, and mine differed.
Next, check and fix:
the HDFS configuration, core-site.xml,
against the HBase configuration, hbase-site.xml.
The port number in both (8020 here, in fs.defaultFS and in the hbase.rootdir URL) must match, otherwise you get connection errors;
running it again produced yet another error,
because I had switched to the root user while my Hadoop installation belongs to the hadoop user; switching back to the hadoop user solved it. Alternatively, grant root the needed permissions.
J. ZooKeeper and HBase both started successfully; can I now view the HBase page in a browser?
In principle yes, on port 16010.