A. Can hbase.rootdir be left unconfigured?
If hbase.zookeeper.property.clientPort is not configured, HBase falls back to a default port, which may not be one of the usable ports (3351–3353) that your ZooKeeper actually provides. Pick one of those ports and configure it.
HBase configuration
Create the zookeeper_data and hbase_tmp directories under this path
》hbase-env.sh
export JAVA_HOME=/home/hadoop/tools/jdk1.6.0_27/
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
#export HBASE_MANAGES_ZK=true
Comment this line out here when using your own (external) ZooKeeper. Note that HBASE_MANAGES_ZK defaults to true, so explicitly setting export HBASE_MANAGES_ZK=false is the safer way to tell HBase not to manage ZooKeeper itself.
》hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:8000/hbase</value>
</property>
<property>
<name>hbase.master</name>
<value>localhost</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>3351</value>
</property>
<property>
<name>hbase.zookeeper.property.authProvider.1</name>
<value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/hbase-0.94.0-security/zookeeper_data</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/home/hadoop/hbase-0.94.0-security/hbase_tmp</value>
</property>
</configuration>
B. Quick reference: default WebUI/service ports of big-data systems
1. HDFS web page: 50070
2. YARN management UI: 8088
3. HistoryServer management UI: 19888
4. ZooKeeper service port: 2181
5. MySQL service port: 3306
6. Hive server (HiveServer2): 10000
7. Kafka service port: 9092
8. Azkaban UI: 8443
9. HBase UI: 16010 (HBase 1.0+) or 60010 (earlier versions)
10. Spark UI: 8080
11. Spark master URL: 7077
C. Querying HBase data happily with happybase
Using happybase to create, read, update, and delete data in HBase.
Prerequisites: the happybase library is installed (pip install happybase), an HBase environment is available with the Thrift port enabled (nohup hbase thrift start &); the default Thrift port is 9090, and 10.10.30.200 is the HBase host IP.
scan method:
Parameters:
row_start, row_stop: starting and ending rowkeys; queries the data between the two rowkeys
row_prefix: rowkey prefix. Note: when row_prefix is used, row_start and row_stop cannot be used
filter: the filter to apply (effective on HBase 0.92 and later)
timestamp: query by the given timestamp
reverse: defaults to False; when True, the scan results are ordered by rowkey in reverse
Example:
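As a minimal sketch (the student table name and the 10.10.30.200:9090 Thrift endpoint are assumptions taken from the prerequisites above), a scan could look like this; the scan_args helper just mirrors the row_prefix restriction described above:

```python
def scan_args(row_start=None, row_stop=None, row_prefix=None,
              filter=None, timestamp=None, reverse=False):
    """Collect keyword arguments for Table.scan(); happybase itself
    rejects row_prefix combined with row_start/row_stop."""
    if row_prefix is not None and not (row_start is None and row_stop is None):
        raise ValueError("row_prefix cannot be combined with row_start/row_stop")
    args = dict(row_start=row_start, row_stop=row_stop, row_prefix=row_prefix,
                filter=filter, timestamp=timestamp, reverse=reverse)
    # drop unset parameters so only meaningful kwargs are passed on
    return {k: v for k, v in args.items() if v not in (None, False)}

def scan_demo():
    import happybase  # third-party: pip install happybase
    conn = happybase.Connection("10.10.30.200", 9090)  # Thrift host/port from above
    table = conn.table("student")                      # table name is an assumption
    # all rows whose rowkey starts with b"student", in reverse rowkey order
    for key, data in table.scan(**scan_args(row_prefix=b"student", reverse=True)):
        print(key, data)
    conn.close()
```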
put method:
Example:
△ If the rowkey given to put already exists, the put modifies (overwrites) the existing data
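A minimal put sketch (table, rowkey, and column names are illustrative); the encode helper converts a plain str dict into the bytes mapping that Table.put expects:

```python
def encode(d):
    """Encode a {'family:qualifier': 'value'} str dict to bytes for Table.put."""
    return {k.encode("utf-8"): v.encode("utf-8") for k, v in d.items()}

def put_demo():
    import happybase  # third-party: pip install happybase
    conn = happybase.Connection("10.10.30.200", 9090)
    table = conn.table("student")  # table/column names are assumptions
    # if rowkey b"student2" already exists, this overwrites (updates) the cells
    table.put(b"student2", encode({"info:name": "zhangsan", "info:age": "18"}))
    conn.close()
```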
delete method:
row: delete the data whose rowkey is row
columns: when the columns parameter is given, only the specified columns are deleted
Example:
Delete the name data of the row with rowkey student2:
Deleted successfully:
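A hedged sketch of both delete forms (endpoint, table, and rowkeys are illustrative):

```python
def delete_demo():
    import happybase  # third-party: pip install happybase
    conn = happybase.Connection("10.10.30.200", 9090)
    table = conn.table("student")
    # delete only the name column of rowkey student2
    table.delete(b"student2", columns=[b"info:name"])
    # delete the whole row when no columns are given
    table.delete(b"student3")
    conn.close()
```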
batch method:
1. Batch operations
2. Using with to manage the batch
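A sketch of batching with a with-block (the table name and endpoint are assumptions); on leaving the block, all buffered mutations are flushed together instead of one Thrift round trip per put:

```python
def batch_demo(rows):
    """rows: {rowkey(str): {'family:qualifier'(str): value(str)}}"""
    import happybase  # third-party: pip install happybase
    conn = happybase.Connection("10.10.30.200", 9090)
    table = conn.table("student")
    # the with-block sends the accumulated mutations when it exits
    with table.batch(batch_size=1000) as bat:
        for key, cols in rows.items():
            bat.put(key.encode(), {c.encode(): v.encode() for c, v in cols.items()})
    conn.close()
```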
row and rows() methods: retrieve the data for the given rowkey(s)
Retrieve a single row:
Retrieve multiple rows:
Returned result:
Example:
Result:
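A sketch of row() versus rows() (rowkeys are illustrative): row() returns a single column-to-value dict, while rows() returns a list of (rowkey, dict) pairs:

```python
def fetch_demo():
    import happybase  # third-party: pip install happybase
    conn = happybase.Connection("10.10.30.200", 9090)
    table = conn.table("student")
    one = table.row(b"student1")                   # {b"info:name": b"...", ...}
    many = table.rows([b"student1", b"student2"])  # [(b"student1", {...}), ...]
    conn.close()
    return one, many
```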
That's all for now 0v0
D. Common big-data ports (summary)
Spark:
7077: port the Spark master and workers use to communicate; also the port for submitting an Application to a standalone cluster
8080: master WEB UI port (resource scheduling)
8081: worker WEB UI port (resource scheduling)
4040: driver WEB UI port (task scheduling)
18080: Spark History Server WEB UI port
Zookeeper:
2181: port clients use to connect to ZooKeeper
2888: used for communication within the ZooKeeper cluster; the leader listens on this port
3888: ZooKeeper port used for leader election
Hbase:
60010: HBase master WEB UI port
60030: HBase RegionServer WEB UI (management) port
Hive:
9083: default listening port of the metastore service
10000: Hive JDBC port
Kafka:
9092: RPC port used for communication between Kafka cluster nodes
Redis:
6379: Redis service port
CDH:
7180: Cloudera Manager WebUI port
7182: port for communication between the Cloudera Manager Server and Agents
HUE:
8888: Hue WebUI port
E. Hadoop default ports and their uses
| Port | Purpose |
|------|---------|
| 9000 | fs.defaultFS, e.g. hdfs://172.25.40.171:9000 |
| 9001 | dfs.namenode.rpc-address; DataNodes connect to this port |
| 50070 | dfs.namenode.http-address |
| 50470 | dfs.namenode.https-address |
| 50100 | dfs.namenode.backup.address |
| 50105 | dfs.namenode.backup.http-address |
| 50090 | dfs.namenode.secondary.http-address, e.g. 172.25.39.166:50090 |
| 50091 | dfs.namenode.secondary.https-address, e.g. 172.25.39.166:50091 |
| 50020 | dfs.datanode.ipc.address |
| 50075 | dfs.datanode.http.address |
| 50475 | dfs.datanode.https.address |
| 50010 | dfs.datanode.address, the DataNode data-transfer port |
| 8480 | dfs.journalnode.rpc-address |
| 8481 | dfs.journalnode.https-address |
| 8032 | yarn.resourcemanager.address |
| 8088 | yarn.resourcemanager.webapp.address, the YARN http port |
| 8090 | yarn.resourcemanager.webapp.https.address |
| 8030 | yarn.resourcemanager.scheduler.address |
| 8031 | yarn.resourcemanager.resource-tracker.address |
| 8033 | yarn.resourcemanager.admin.address |
| 8042 | yarn.nodemanager.webapp.address |
| 8040 | yarn.nodemanager.localizer.address |
| 8188 | yarn.timeline-service.webapp.address |
| 10020 | mapreduce.jobhistory.address |
| 19888 | mapreduce.jobhistory.webapp.address |
| 2888 | ZooKeeper; on the leader, listens for follower connections |
| 3888 | ZooKeeper, used for leader election |
| 2181 | ZooKeeper, listens for client connections |
| 60010 | hbase.master.info.port, HMaster http port |
| 60000 | hbase.master.port, HMaster RPC port |
| 60030 | hbase.regionserver.info.port, HRegionServer http port |
| 60020 | hbase.regionserver.port, HRegionServer RPC port |
| 8080 | hbase.rest.port, HBase REST server port |
| 10000 | hive.server2.thrift.port |
| 9083 | hive.metastore.uris |
The most commonly used of these are probably 50070 and 8088:
http://ip:50070/
Monitor job execution status in the web UI:
http://ip:8088/
F. Connecting to HBase from Eclipse on Windows fails; how can this be fixed? Begging for help!!
First case:
1. Test HBase:
a) cd hbase-0.90.4
b) bin/start-hbase.sh
c) bin/hbase shell
d) create 'database','cf'
e) list
f) If this succeeds, you should see the following result:
hbase(main):001:0> list
TABLE
database
1 row(s) in 0.5910 seconds
2. Create a Java project, copy the lib directory from hbase-0.90.4 into the project, and add the jar files in it to the classpath, together with hbase-0.90.5.jar and test.jar.
3. Create the class:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class HelloHBase {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.128.128");
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor tableDescriptor = admin.getTableDescriptor(Bytes.toBytes("database"));
        byte[] name = tableDescriptor.getName();
        System.out.println(new String(name));
        HColumnDescriptor[] columnFamilies = tableDescriptor.getColumnFamilies();
        for (HColumnDescriptor d : columnFamilies) {
            System.out.println(d.getNameAsString());
        }
    }
}
Run it; it should print the following two lines:
database
cf
If not, the configuration failed; check the other settings.
==============================================
Problem 1:
java.net.ConnectException: Connection refused: no further information
a. zookeeper.ClientCnxn: Session 0x0 for server null,
Solution: ZooKeeper is not started or cannot be connected to; look for the cause by checking each node's ZooKeeper status, port usage, firewall, and so on.
b. getMaster attempt 4 of 10 failed; retrying after sleep of 2000
Solution: check the master log. If it contains a message such as org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as BRDVM0240,43992,1373943529301, RPC listening on /127.0.0.1:43992, sessionid=0x13fe56a7d4b0001, then the HRegionServer is listening on localhost (127.0.0.1). Edit the /etc/hosts file on the server side: in the line
127.0.0.1 servername localhost.localdomain localhost
remove servername, then restart HBase.
Second case:
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1119)
12/09/03 15:37:15 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.0.118:2181
12/09/03 15:37:16 INFO zookeeper.ClientCnxn: EventThread shut down
12/09/03 15:37:16 INFO zookeeper.ZooKeeper: Session: 0x0 closed
Exception in thread "main" org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:156)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1209)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:511)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:502)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:172)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:92)
at com.biencloud.test.first_hbase.main(first_hbase.java:22)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:809)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:837)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:931)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:134)
... 6 more
This error means Eclipse did not connect to ZooKeeper. Add the ZooKeeper configuration to your program, as follows:
Configuration conf=HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum","192.168.0.118, 192.168.0.186, 192.168.0.182");
conf.set("hbase.zookeeper.property.clientPort","2222");
Source link: http://www.aboutyun.com/thread-5866-1-1.html
G. How to check which port HBase has open
The default master web UI port is now 16010 (HBase 1.0 and later).
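You would normally check with netstat or a browser; as a small stdlib sketch, you can also probe a port directly from Python (16010 being the HMaster web UI on HBase 1.0+, 9090 the Thrift gateway):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 16010) for the HMaster web UI,
#      port_open("localhost", 9090)  for the Thrift gateway
```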
H. Which ports does HBase need to open?
class HbCli {
public:
// Constructor and Destructor
HbCli(const char *server, const char *port);
~HbCli();
// Util Functions
bool connect();
bool disconnect();
bool reconnect();
inline bool isconnect();
// HBase DDL Functions
bool createTable(const std::string table, const ColVec &columns);
bool deleteTable(const std::string table);
bool tableExists(const std::string table);
// HBase DML Functions
bool putRow(const std::string table, const std::string row, const std::string column, const std::string value);
bool putRowWithColumns(const std::string table, const std::string row, const StrMap columns);
bool putRows(const std::string table, const RowMap rows);
bool getRow(const std::string table, const std::string row, ResVec &rowResult);
bool getRowWithColumns(const std::string table, const std::string row, const StrVec columns, ResVec &rowResult);
bool getRows(const std::string table, const StrVec rows, ResVec &rowResult);
bool getRowsWithColumns(const std::string table, const StrVec rows, const StrVec columns, ResVec &rowResult);
bool delRow(const std::string table, const std::string row);
bool delRowWithColumn(const std::string table, const std::string row, const std::string column);
bool delRowWithColumns(const std::string table, const std::string row, const StrVec columns);
bool scan(const std::string table, const std::string startRow, StrVec columns, ResVec &values);
bool scanWithStop(const std::string table, const std::string startRow, const std::string stopRow, StrVec columns, ResVec &values);
// HBase Util Functions
void printRow(const ResVec &rowResult);
private:
boost::shared_ptr<TSocket> socket;
boost::shared_ptr<TTransport> transport;
boost::shared_ptr<TProtocol> protocol;
HbaseClient client;
bool _is_connected;
};
I. HBase fails to start properly: the HBase web page won't open and HMaster exits right after starting
In that situation, look at the log files under HBase's logs directory; in my case the file is hbase-hadoop-master-centos01.log.
The error turned out to be that the ZooKeeper settings in hbase-site.xml were written incorrectly: a comma had been typed as a period. Be very careful with configuration files.
After fixing that and rerunning, it still reported an error.
This was because the port configured in HDFS's core-site.xml and in HBase's hbase-site.xml must be the same, and mine differed.
Next, check and fix them:
check the HDFS configuration (core-site.xml)
against the HBase configuration (hbase-site.xml).
The port number 8020 in these two configurations must be identical, otherwise you get a connection error.
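For illustration, the pair of settings that must agree could look like this (the centos01 hostname is an assumption taken from the log filename above; 8020 is the port mentioned in the text):

```xml
<!-- core-site.xml -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://centos01:8020</value>
</property>

<!-- hbase-site.xml: the rootdir must use the same host:port -->
<property>
<name>hbase.rootdir</name>
<value>hdfs://centos01:8020/hbase</value>
</property>
```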
Rerunning produced yet another error:
this was because I had switched to the root user while my Hadoop installation belongs to the hadoop user; switching back to the hadoop user solved it. Alternatively, you can grant root the needed permissions.
J. ZooKeeper and HBase both started successfully; does that mean I can view the HBase page in a browser?
In principle, yes; the port is 16010.