Troubleshooting Hue After Installation: A Summary of Fixes (Recommended)

Let's skip the preamble and get straight to the useful material.

My cluster consists of bigdatamaster (192.168.80.10), bigdataslave1 (192.168.80.11), and bigdataslave2 (192.168.80.12). The installation directory is /home/hadoop/app.

The official documentation recommends installing Hue on the master machine, and I follow that advice here: Hue is installed on bigdatamaster.
Hue version: hue-3.9.0-cdh5.5.4, which must be compiled before use (an internet connection is required). A word of advice: if your machine is powerful enough, install Cloudera Manager, since Hue comes from the same family. I can also say from experience that some component versions will have problems, and ruling them out can take the better part of a day, so be prepared. My own laptop tops out at 8 GB of RAM, so I can only practice with a manual installation.

This write-up is purely for students without high-end hardware or the budget for it. On the lab machines I have already set up both Cloudera Manager and Ambari. Related posts:

The two most popular big data cluster management tools: Ambari and Cloudera Manager
Installing and deploying a big data cluster with Cloudera (illustrated, in five major steps) (strongly recommended)
Installing and deploying a big data cluster with Ambari (illustrated, in five major steps) (strongly recommended)
Problem 1:

Hive queries in Hue fail, and the page reports: Could not connect to localhost:10000, or Could not connect to bigdatamaster:10000.

Solution:

Start hiveserver2 in your Hive installation, because port 10000 is the port of the hiveserver2 service; otherwise the Hue web UI cannot run Hive queries. (bigdatamaster is my machine's hostname.)

Under $HIVE_HOME:

[hadoop@bigdatamaster ~]$ cd $HIVE_HOME
[hadoop@bigdatamaster hive]$ bin/hive --service hiveserver2 &
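Before or after restarting anything, it helps to confirm whether something is actually listening on the HiveServer2 port. This is a minimal diagnostic sketch, not part of Hue; the host and port below are just this post's setup, so substitute your own:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# For this post's cluster the check would be port_open("bigdatamaster", 10000);
# False means hiveserver2 is not running (or a firewall is blocking the port).
print(port_open("127.0.0.1", 10000))
```

If the probe returns False, start hiveserver2 as shown above and probe again before blaming the Hue configuration.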
Note: the relevant configuration from my hive-site.xml was shown here (screenshot omitted).

The problem is solved.
Problem 2:

database is locked

This is an error from Hue's default SQLite database; you can replace SQLite with MySQL, PostgreSQL, and so on.

https://www.cloudera.com/documentation/enterprise/5-5-x/topics/admin_hue_ext_db.html (official documentation)
Also see this blog post for reference (https://my.oschina.net/aibati2008/blog/647493): a summary of problems encountered while installing, configuring, and using Hue.

# Configuration options for specifying the Desktop Database. For more info,
# see http://docs.djangoproject.com/en/1.4/ref/settings/#database-engine
# ------------------------------------------------------------------------
[[database]]
  # Database engine is typically one of:
  # postgresql_psycopg2, mysql, sqlite3 or oracle.
  #
  # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
  # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
  # Note for Oracle, you can use the Oracle Service Name by setting "port=0" and then "name=<host>:<port>/<service_name>".
  # Note for MariaDB use the 'mysql' engine.
  ## engine=sqlite3
  ## host=
  ## port=
  ## user=
  ## password=
  ## name=desktop/desktop.db
  ## options={}

The above is the default.
hue默認(rèn)使用sqlite作為元數(shù)據(jù)庫(kù),不推薦在生產(chǎn)環(huán)境中使用。會(huì)經(jīng)常出現(xiàn)database is lock的問題。
Solution:

The official site documents a fix as well, but the procedure seems slightly off, and it does not apply to versions after 3.7. I am currently on 3.11; below is the quickest switching method I have found.

[root@bigdatamaster hadoop]# mysql -uhive -phive
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 49
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| oozie              |
| test               |
+--------------------+
5 rows in set (0.07 sec)

mysql>
Since my MySQL user is hive, the password is hive, and the database is also hive, the [[database]] section becomes:

[[database]]
  engine=mysql
  host=bigdatamaster
  port=3306
  user=hive
  password=hive
  name=hive
  ## options={}
Then restart the Hue process:

[hadoop@bigdatamaster hue]$ build/env/bin/supervisor

With this configuration in place, starting Hue and opening it in a browser produces an error, because the MySQL database has not been initialized:

DatabaseError: (1146, "Table 'hue.desktop_settings' doesn't exist")
or
ProgrammingError: (1146, "Table 'hive.django_session' doesn't exist")
Initialize the database:

cd /home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
bin/hue syncdb
bin/hue migrate

After these commands finish, you can see in MySQL that Hue's tables have been created. Start Hue, and it is accessible normally.
Alternatively, you can create the database in MySQL first. Name it hue, with user hue and password hue.

First:

[root@master app]# mysql -uroot -prootroot
mysql> create user 'hue' identified by 'hue';          -- create an account: user hue, password hue
or
mysql> create user 'hue'@'%' identified by 'hue';      -- same account, explicitly for any host

Then:
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'%' IDENTIFIED BY 'hue' WITH GRANT OPTION;              -- grant to hue connecting from any host
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'bigdatamaster' IDENTIFIED BY 'hue' WITH GRANT OPTION;  -- grant to hue connecting from bigdatamaster
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'localhost' IDENTIFIED BY 'hue' WITH GRANT OPTION;      -- grant to hue connecting from localhost (this one is optional)
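Since it is the same GRANT repeated once per host, the statements can be templated; a small convenience sketch (the user, password, and host values below are the ones from this post, and the helper name is my own):

```python
def grant_statements(user, password, hosts):
    """Build one GRANT ALL statement per host for the given MySQL account."""
    tmpl = ("GRANT ALL PRIVILEGES ON *.* TO '{u}'@'{h}' "
            "IDENTIFIED BY '{p}' WITH GRANT OPTION;")
    return [tmpl.format(u=user, h=h, p=password) for h in hosts]

# Reproduce the three statements above for the hue account.
for stmt in grant_statements("hue", "hue", ["%", "bigdatamaster", "localhost"]):
    print(stmt)
```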
mysql> GRANT ALL PRIVILEGES ON *.* to 'hue'@'bigdatamaster' IDENTIFIED BY 'hue' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host,password from mysql.user;
+-------+---------------+-------------------------------------------+
| user  | host          | password                                  |
+-------+---------------+-------------------------------------------+
| root  | localhost     |                                           |
| root  | bigdatamaster |                                           |
| root  | 127.0.0.1     |                                           |
|       | localhost     |                                           |
|       | bigdatamaster |                                           |
| hive  | %             | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive  | bigdatamaster | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive  | localhost     | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| oozie | %             | *2B03FE0359FAD3B80620490CE614F8622E0828CD |
| oozie | bigdatamaster | *2B03FE0359FAD3B80620490CE614F8622E0828CD |
| oozie | localhost     | *2B03FE0359FAD3B80620490CE614F8622E0828CD |
| hue   | %             | *15221DE9A04689C4D312DEAC3B87DDF542AF439E |
| hue   | localhost     | *15221DE9A04689C4D312DEAC3B87DDF542AF439E |
| hue   | bigdatamaster | *15221DE9A04689C4D312DEAC3B87DDF542AF439E |
+-------+---------------+-------------------------------------------+
15 rows in set (0.00 sec)

mysql> exit;
Bye
[root@bigdatamaster hadoop]#
[[database]]
  engine=mysql
  host=bigdatamaster
  port=3306
  user=hue
  password=hue
  name=hue
  ## options={}

With this configuration in place, starting Hue and opening it in a browser produces an error such as the following. If you hit it, remember that you still need to create a database named hue:

OperationalError: (1049, "Unknown database 'hue'")

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| oozie              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql> CREATE DATABASE hue;
Query OK, 1 row affected (0.49 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[root@bigdatamaster hadoop]#
After starting Hue, an error like the following means the MySQL database has not been initialized:

ProgrammingError: (1146, "Table 'hue.django_session' doesn't exist")

Initialize the database:

cd /home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
bin/hue syncdb
bin/hue migrate

The details are below (pay attention here: read it through before acting!).
Note: if you accept the default prompt of hadoop for the superuser name here, then you will log in as hadoop.

Of course, if you get this wrong, you can still recover as follows.
Step 1:

(screenshots omitted)

Handled that way, though, the result is not ideal.

So, to avoid that situation, I enter hue for the username and hue for the password directly.
[hadoop@bigdatamaster env]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ ll
total 12
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:59 bin
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:46 include
drwxrwxr-x 3 hadoop hadoop 4096 May  5 20:46 lib
lrwxrwxrwx 1 hadoop hadoop    3 May  5 20:46 lib64 -> lib
-rw-rw-r-- 1 hadoop hadoop    0 May  5 20:46 stamp
[hadoop@bigdatamaster env]$ bin/hue syncdb
Syncing...
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_groups
Creating table auth_user_user_permissions
Creating table auth_user
Creating table django_openid_auth_nonce
Creating table django_openid_auth_association
Creating table django_openid_auth_useropenid
Creating table django_content_type
Creating table django_session
Creating table django_site
Creating table django_admin_log
Creating table south_migrationhistory
Creating table axes_accessattempt
Creating table axes_accesslog

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'hadoop'): hue
Email address:
Password: hue
Password (again):
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)

Synced:
 > django.contrib.auth
 > django_openid_auth
 > django.contrib.contenttypes
 > django.contrib.sessions
 > django.contrib.sites
 > django.contrib.staticfiles
 > django.contrib.admin
 > south
 > axes
 > about
 > filebrowser
 > help
 > impala
 > jobbrowser
 > metastore
 > proxy
 > rdbms
 > zookeeper
 > indexer

Not synced (use migrations):
 - django_extensions
 - desktop
 - beeswax
 - hbase
 - jobsub
 - oozie
 - pig
 - search
 - security
 - spark
 - sqoop
 - useradmin
(use ./manage.py migrate to migrate these)
[hadoop@bigdatamaster env]$
Then run:
[hadoop@bigdatamaster env]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ bin/hue migrate
Running migrations for django_extensions:
 - Migrating forwards to 0001_empty.
 > django_extensions:0001_empty
 - Loading initial data for django_extensions.
Installed 0 object(s) from 0 fixture(s)
Running migrations for desktop:
 - Migrating forwards to 0016_auto__add_unique_document2_uuid_version_is_history.
 > pig:0001_initial
 > oozie:0001_initial
 > oozie:0002_auto__add_hive
 > oozie:0003_auto__add_sqoop
 > oozie:0004_auto__add_ssh
 > oozie:0005_auto__add_shell
 > oozie:0006_auto__chg_field_java_files__chg_field_java_archives__chg_field_sqoop_f
 > oozie:0007_auto__chg_field_sqoop_script_path
 > oozie:0008_auto__add_distcp
 > oozie:0009_auto__add_decision
 > oozie:0010_auto__add_fs
 > oozie:0011_auto__add_email
 > oozie:0012_auto__add_subworkflow__chg_field_email_subject__chg_field_email_body
 > oozie:0013_auto__add_generic
 > oozie:0014_auto__add_decisionend
 > oozie:0015_auto__add_field_dataset_advanced_start_instance__add_field_dataset_ins
 > oozie:0016_auto__add_field_coordinator_job_properties
 > oozie:0017_auto__add_bundledcoordinator__add_bundle
 > oozie:0018_auto__add_field_workflow_managed
 > oozie:0019_auto__add_field_java_capture_output
 > oozie:0020_chg_large_varchars_to_textfields
 > oozie:0021_auto__chg_field_java_args__add_field_job_is_trashed
 > oozie:0022_auto__chg_field_mapreduce_node_ptr__chg_field_start_node_ptr
 > oozie:0022_change_examples_path_format
 - Migration 'oozie:0022_change_examples_path_format' is marked for no-dry-run.
 > oozie:0023_auto__add_field_node_data__add_field_job_data
 > oozie:0024_auto__chg_field_subworkflow_sub_workflow
 > oozie:0025_change_examples_path_format
 - Migration 'oozie:0025_change_examples_path_format' is marked for no-dry-run.
 > desktop:0001_initial
 > desktop:0002_add_groups_and_homedirs
 > desktop:0003_group_permissions
 > desktop:0004_grouprelations
 > desktop:0005_settings
 > desktop:0006_settings_add_tour
 > beeswax:0001_initial
 > beeswax:0002_auto__add_field_queryhistory_notify
 > beeswax:0003_auto__add_field_queryhistory_server_name__add_field_queryhistory_serve
 > beeswax:0004_auto__add_session__add_field_queryhistory_server_type__add_field_query
 > beeswax:0005_auto__add_field_queryhistory_statement_number
 > beeswax:0006_auto__add_field_session_application
 > beeswax:0007_auto__add_field_savedquery_is_trashed
 > beeswax:0008_auto__add_field_queryhistory_query_type
 > desktop:0007_auto__add_documentpermission__add_documenttag__add_document
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/backends/mysql/base.py:124: Warning: Some non-transactional changed tables couldn't be rolled back
  return self.cursor.execute(query, args)
 > desktop:0008_documentpermission_m2m_tables
 > desktop:0009_auto__chg_field_document_name
 > desktop:0010_auto__add_document2__chg_field_userpreferences_key__chg_field_userpref
 > desktop:0011_auto__chg_field_document2_uuid
 > desktop:0012_auto__chg_field_documentpermission_perms
 > desktop:0013_auto__add_unique_documenttag_owner_tag
 > desktop:0014_auto__add_unique_document_content_type_object_id
 > desktop:0015_auto__add_unique_documentpermission_doc_perms
 > desktop:0016_auto__add_unique_document2_uuid_version_is_history
 - Loading initial data for desktop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for beeswax:
 - Migrating forwards to 0013_auto__add_field_session_properties.
 > beeswax:0009_auto__add_field_savedquery_is_redacted__add_field_queryhistory_is_reda
 > beeswax:0009_auto__chg_field_queryhistory_server_port
 > beeswax:0010_merge_database_state
 > beeswax:0011_auto__chg_field_savedquery_name
 > beeswax:0012_auto__add_field_queryhistory_extra
 > beeswax:0013_auto__add_field_session_properties
 - Loading initial data for beeswax.
Installed 0 object(s) from 0 fixture(s)
Running migrations for hbase:
 - Migrating forwards to 0001_initial.
 > hbase:0001_initial
 - Loading initial data for hbase.
Installed 0 object(s) from 0 fixture(s)
Running migrations for jobsub:
 - Migrating forwards to 0006_chg_varchars_to_textfields.
 > jobsub:0001_initial
 > jobsub:0002_auto__add_ooziestreamingaction__add_oozieaction__add_oozieworkflow__ad
 > jobsub:0003_convertCharFieldtoTextField
 > jobsub:0004_hue1_to_hue2
 - Migration 'jobsub:0004_hue1_to_hue2' is marked for no-dry-run.
 > jobsub:0005_unify_with_oozie
 - Migration 'jobsub:0005_unify_with_oozie' is marked for no-dry-run.
 > jobsub:0006_chg_varchars_to_textfields
 - Loading initial data for jobsub.
Installed 0 object(s) from 0 fixture(s)
Running migrations for oozie:
 - Migrating forwards to 0027_auto__chg_field_node_name__chg_field_job_name.
 > oozie:0026_set_default_data_values
 - Migration 'oozie:0026_set_default_data_values' is marked for no-dry-run.
 > oozie:0027_auto__chg_field_node_name__chg_field_job_name
 - Loading initial data for oozie.
Installed 0 object(s) from 0 fixture(s)
Running migrations for pig:
 - Nothing to migrate.
 - Loading initial data for pig.
Installed 0 object(s) from 0 fixture(s)
Running migrations for search:
 - Migrating forwards to 0003_auto__add_field_collection_owner.
 > search:0001_initial
 > search:0002_auto__del_core__add_collection
 > search:0003_auto__add_field_collection_owner
 - Loading initial data for search.
Installed 0 object(s) from 0 fixture(s)
You have no migrations for the 'security' app. You might want some.
Running migrations for spark:
 - Migrating forwards to 0001_initial.
 > spark:0001_initial
 - Loading initial data for spark.
Installed 0 object(s) from 0 fixture(s)
Running migrations for sqoop:
 - Migrating forwards to 0001_initial.
 > sqoop:0001_initial
 - Loading initial data for sqoop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for useradmin:
 - Migrating forwards to 0006_auto__add_index_userprofile_last_activity.
 > useradmin:0001_permissions_and_profiles
 - Migration 'useradmin:0001_permissions_and_profiles' is marked for no-dry-run.
 > useradmin:0002_add_ldap_support
 - Migration 'useradmin:0002_add_ldap_support' is marked for no-dry-run.
 > useradmin:0003_remove_metastore_readonly_huepermission
 - Migration 'useradmin:0003_remove_metastore_readonly_huepermission' is marked for no-dry-run.
 > useradmin:0004_add_field_UserProfile_first_login
 > useradmin:0005_auto__add_field_userprofile_last_activity
 > useradmin:0006_auto__add_index_userprofile_last_activity
 - Loading initial data for useradmin.
Installed 0 object(s) from 0 fixture(s)
[hadoop@bigdatamaster env]$
After these commands finish, you can see in MySQL that Hue's tables have been created.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| hue                |
| mysql              |
| oozie              |
| test               |
+--------------------+
6 rows in set (0.06 sec)

mysql> use hue;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------------------+
| Tables_in_hue                  |
+--------------------------------+
| auth_group                     |
| auth_group_permissions         |
| auth_permission                |
| auth_user                      |
| auth_user_groups               |
| auth_user_user_permissions     |
| axes_accessattempt             |
| axes_accesslog                 |
| beeswax_metainstall            |
| beeswax_queryhistory           |
| beeswax_savedquery             |
| beeswax_session                |
| desktop_document               |
| desktop_document2              |
| desktop_document2_dependencies |
| desktop_document2_tags         |
| desktop_document_tags          |
| desktop_documentpermission     |
| desktop_documenttag            |
| desktop_settings               |
| desktop_userpreferences        |
| django_admin_log               |
| django_content_type            |
| django_openid_auth_association |
| django_openid_auth_nonce       |
| django_openid_auth_useropenid  |
| django_session                 |
| django_site                    |
| documentpermission_groups      |
| documentpermission_users       |
| jobsub_checkforsetup           |
| jobsub_jobdesign               |
| jobsub_jobhistory              |
| jobsub_oozieaction             |
| jobsub_ooziedesign             |
| jobsub_ooziejavaaction         |
| jobsub_ooziemapreduceaction    |
| jobsub_ooziestreamingaction    |
| oozie_bundle                   |
| oozie_bundledcoordinator       |
| oozie_coordinator              |
| oozie_datainput                |
| oozie_dataoutput               |
| oozie_dataset                  |
| oozie_decision                 |
| oozie_decisionend              |
| oozie_distcp                   |
| oozie_email                    |
| oozie_end                      |
| oozie_fork                     |
| oozie_fs                       |
| oozie_generic                  |
| oozie_history                  |
| oozie_hive                     |
| oozie_java                     |
| oozie_job                      |
| oozie_join                     |
| oozie_kill                     |
| oozie_link                     |
| oozie_mapreduce                |
| oozie_node                     |
| oozie_pig                      |
| oozie_shell                    |
| oozie_sqoop                    |
| oozie_ssh                      |
| oozie_start                    |
| oozie_streaming                |
| oozie_subworkflow              |
| oozie_workflow                 |
| pig_document                   |
| pig_pigscript                  |
| search_collection              |
| search_facet                   |
| search_result                  |
| search_sorting                 |
| south_migrationhistory         |
| useradmin_grouppermission      |
| useradmin_huepermission        |
| useradmin_ldapgroup            |
| useradmin_userprofile          |
+--------------------------------+
80 rows in set (0.00 sec)

mysql>
Start Hue, and it is accessible normally:

[hadoop@bigdatamaster hue-3.9.0-cdh5.5.4]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4
[hadoop@bigdatamaster hue-3.9.0-cdh5.5.4]$ build/env/bin/supervisor
Problem 3:
Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory
Solution:

Find the libmysqlclient.so.18 file on a machine (or VM) where MySQL is installed and copy it into the system library directory /usr/lib64.
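To check whether the dynamic loader can see a MySQL client library at all, Python's standard library can ask it directly; this is a generic diagnostic sketch, not Hue-specific:

```python
import ctypes.util

# find_library returns None when the loader cannot locate the library,
# which is exactly the condition behind the "cannot open shared object
# file" error above. It searches the standard library paths (e.g. /usr/lib64).
for name in ("mysqlclient_r", "mysqlclient"):
    print(name, "->", ctypes.util.find_library(name))
```

If both lines print None, the library really is missing from the loader's search path and copying (or symlinking) it into /usr/lib64 followed by ldconfig is the fix.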
Note: my Hive is installed on the bigdatamaster machine. My configuration here (the [[[mysql]]] section of hue.ini) is:

# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}

# mysql, oracle, or postgresql configuration.
[[[mysql]]]
  # Name to show in the UI.
  nice_name="My SQL DB"

  # For MySQL and PostgreSQL, name is the name of the database.
  # For Oracle, Name is instance of the Oracle server. For express edition
  # this is 'xe' by default.
  name=hive

  # Database backend to use. This can be:
  # 1. mysql
  # 2. postgresql
  # 3. oracle
  engine=mysql

  # IP or hostname of the database to connect to.
  host=bigdatamaster

  # Port the database server is listening to. Defaults are:
  # 1. MySQL: 3306
  # 2. PostgreSQL: 5432
  # 3. Oracle Express Edition: 1521
  port=3306

  # Username to authenticate with when connecting to the database.
  user=hive

  # Password matching the username to authenticate with when
  # connecting to the database.
  password=hive

  # Database options to send to the server when connecting.
  # https://docs.djangoproject.com/en/1.4/ref/databases/
  ## options={}
The problem is now solved!
Problem 4 (the same as Problem 15 in this post)

Clicking "File Browser" raises an error:

Cannot access: /user/admin. (Note: you are a Hue admin but not the HDFS superuser, "hdfs".)

Solution:

Edit core-site.xml under $HADOOP_HOME/etc/hadoop and add:

<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Then restart Hadoop (stop-all.sh, then start-all.sh).
The problem is now solved!

My usual configuration, in core-site.xml under $HADOOP_HOME/etc/hadoop/, is:

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>

Why add all of these? Because I have three users. Adjust according to your own situation; after modifying the file, be sure to restart with sbin/start-all.sh, and the problem is solved.
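Since every proxy user needs the same hosts/groups pair of properties, the whole block can be generated for any set of users; a convenience sketch (the helper name is my own, and the user list matches this post's three users):

```python
def proxyuser_xml(users, value="*"):
    """Emit the hadoop.proxyuser.<user>.{hosts,groups} property pairs
    for core-site.xml, one pair per user."""
    lines = []
    for user in users:
        for key in ("hosts", "groups"):
            lines += ["<property>",
                      "  <name>hadoop.proxyuser.%s.%s</name>" % (user, key),
                      "  <value>%s</value>" % value,
                      "</property>"]
    return "\n".join(lines)

# Generate the six properties shown above for hadoop, hue, and hdfs.
print(proxyuser_xml(["hadoop", "hue", "hdfs"]))
```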
Problem 5

When integrating HBase with Hue, the HBase Browser shows: Api Error: TSocket read 0 bytes

Solution:

See https://stackoverflow.com/questions/20415493/api-error-tsocket-read-0-bytes-when-using-hue-with-hbase

Add this to your hbase-site.xml:

<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>
<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>

That resolves the problem.
Problem 6:

User: hadoop is not allowed to impersonate hue
Api 錯(cuò)誤:<html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/> <title>Error 500 User: hadoop is not allowed to impersonate hue</title> </head> <body><h2>HTTP ERROR 500</h2> <p>Problem accessing /. Reason: <pre> User: hadoop is not allowed to impersonate hue</pre></p><h3>Caused by:</h3><pre>javax.servlet.ServletException: User: hadoop is not allowed to impersonate hue at org.apache.hadoop.hbase.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:117) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelE
Solution:

In core-site.xml under $HADOOP_HOME/etc/hadoop, change:
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
to:
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>hadoop</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>hadoop</value>
</property>
Problem 7

Api error: the cluster configuration (Cluster|bigdatamaster:9090) is formatted incorrectly.
Change it to the correct format (the before and after screenshots are omitted).
Problem 8:

Viewing the HDFS file browser in Hue reports that the current user has no permission to view it:

cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
Solution:

Check the state of both NameNodes in the web UI; the previously active NameNode may now be in standby. That was exactly my cluster's situation: the service originally ran on master1, and after it failed it automatically switched over to master2, but webhdfs in Hue was still configured for master1, so Hue had no access.
Problem 9

A Hive query fails with:
org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=b3d05ca6-e3e8-4bef-b869-0ea0732c3ac5]
Solution:

Set hive.server2.logging.operation.enabled=true in hive-site.xml:

<property>
  <name>hive.server2.logging.operation.enabled</name>
  <value>true</value>
</property>
Problem 10

Starting the Hue web UI fails with the error: OperationalError: attempt to write a readonly database
Solution:

The user that starts the Hue server has no permission to write the default SQLite DB. Make sure every file under the installation directory is owned by the hadoop user:

chown -R hadoop:hadoop hue-3.9.0-cdh5.5.4
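A quick way to find which files under the installation directory the current user cannot write, before or after the chown; a generic sketch (the helper name is my own, and the path you pass it is whatever your install directory is):

```python
import os

def unwritable(root):
    """List files under root that the current user cannot write."""
    bad = []
    for dirpath, _, files in os.walk(root):
        for f in files:
            path = os.path.join(dirpath, f)
            if not os.access(path, os.W_OK):
                bad.append(path)
    return bad

# e.g. unwritable("/home/hadoop/app/hue-3.9.0-cdh5.5.4")
# An empty list means the chown above did its job.
```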
Problem 11

Hue error: Filesystem root '/' should be owned by 'hdfs'
(that is, the filesystem root directory "/" should belong to the "hdfs" user)
Solution:

In the file desktop/libs/hadoop/src/hadoop/fs/webhdfs.py, change DEFAULT_HDFS_SUPERUSER = 'hdfs' to your Hadoop user.
Problem 12

Error: the kerberos principal name is missing from the hbase-site.xml configuration file.

Solution: (screenshots omitted)
Problem 13

Hue cannot connect to Zookeeper correctly: timed out
Solution:

This means your zookeeper section is not fully configured yet. See:

A detailed walkthrough of the zookeeper section of the hue.ini configuration file (illustrated; covers HA clusters)
Problem 14

Sqoop error:
Solution:

Check whether your Sqoop is a Sqoop2 version or not. Mine, for example (screenshot omitted):
Note: Hue only supports Sqoop2. For the differences between Sqoop1 and Sqoop2, go straight to the official site.
So you need to switch your Sqoop version. See:

Deploying and setting up sqoop2-1.99.5-cdh5.5.4.tar.gz
Problem 15 (see Problem 4 in this post)

Cannot access: /user/hadoop. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
SecurityException: Failed to obtain user group information: org.apache.hadoop.security.authorize.AuthorizationException: User: hue is not allowed to impersonate hadoop (error 403)
Problem analysis

After Hue is installed, the first user to log in becomes Hue's superuser and can manage users and so on. In practice, however, this user cannot manage data created in HDFS by the supergroup.

Users created in Hue can manage the data under their own folder, /user/XXX. So how do you manage the Hadoop superuser's data? Hue provides a feature that integrates Unix users into Hue; log in to Hue as the Hadoop superuser and you can manage that data without trouble.
The integration takes the following steps.

Step 1: make sure the hadoop user group exists on the system (hadoop certainly does here).
Step 2: run the following command:
[hadoop@bigdatamaster env]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ ll
total 16
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:59 bin
drwxrwxr-x 2 hadoop hadoop 4096 May  5 20:46 include
drwxrwxr-x 3 hadoop hadoop 4096 May  5 20:46 lib
lrwxrwxrwx 1 hadoop hadoop    3 May  5 20:46 lib64 -> lib
drwxrwxr-x 2 hadoop hadoop 4096 Aug  2 17:20 logs
-rw-rw-r-- 1 hadoop hadoop    0 May  5 20:46 stamp
[hadoop@bigdatamaster env]$ bin/hue useradmin_sync_with_unix
[hadoop@bigdatamaster env]$
Step 3:

After running the command above, go into Hue and you will find that the users have been imported. They have no passwords, though, so you need to set passwords for the Unix users and assign them to groups.
(screenshots omitted)

After completing the steps above, log in and you can manage HDFS data happily.
Creating an HDFS directory from Hue

Problem: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup"
Solution: add an hdfs user in Hue, then log in as hdfs to create directories and upload files.
Reference:
https://geosmart.github.io/2015/10/27/CDH%E4%BD%BF%E7%94%A8%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/
In the end, however, this still did not work. Bottom line: for this problem, go back to Problem 4 of this post.
Problem 16

Cannot create home directory for user hue / for user hadoop / for user hdfs.
Solution:

In core-site.xml under $HADOOP_HOME/etc/hadoop:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
For example, if the home directory cannot be created for user hdfs, then:

<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>

Success!
Problem 17

User [hue] not defined as proxyuser
Where the problem comes from:

Oozie is running, and clicking Workflows in the Hue UI produces the following error.
Solution:
Hue submits MapReduce jobs to Oozie as the logged in user. You need to configure Oozie to accept the?hue?user to be a proxyuser. Specify this in your?oozie-site.xml?(even in a non-secure cluster), and restart Oozie:
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Reference: http://archive.cloudera.com/cdh4/cdh/4/hue-2.0.0-cdh4.0.1/manual.html
That is, add the properties above to the oozie-site.xml configuration file.
After adding them, restart Oozie. Remember to check the process with jps first, kill it, then restart.
Note: /home/hadoop/app/oozie is the directory where I installed Oozie.
Alternatively, restarting with the following command also works:

[hadoop@bigdatamaster oozie]$ pwd
/home/hadoop/app/oozie
[hadoop@bigdatamaster oozie]$ bin/oozied.sh restart
Problem 18

The Oozie server is not running.
Solution: see

Detailed steps for starting Oozie (on a 3-node CDH cluster)
Problem 19

Api error: ('Connection aborted.', error(111, 'Connection refused'))
Solution: (screenshots omitted)
Problem 20

Could not connect to bigdatamaster:21050
Solution:

Start the Impala service.
Problem 21:
OperationalError: (2003, "Can't connect to MySQL server on 'bigdatamaster' (111)")
Solution:

[root@bigdatamaster hadoop]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@bigdatamaster hadoop]#

Then refresh the page.
Problem 22:

Running ./build/env/bin/supervisor fails with IOError: [Errno 13] Permission denied: '/opt/modules/hue-3.9.0-cdh5.5.0/logs/supervisor.log':

[kfk@bigdata-pro01 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor
Traceback (most recent call last):
  File "./build/env/bin/supervisor", line 9, in <module>
    load_entry_point('desktop==3.9.0', 'console_scripts', 'supervisor')()
  File "/opt/modules/hue-3.9.0-cdh5.5.0/desktop/core/src/desktop/supervisor.py", line 358, in main
    _init_log(log_dir)
  File "/opt/modules/hue-3.9.0-cdh5.5.0/desktop/core/src/desktop/supervisor.py", line 294, in _init_log
    desktop.log.basic_logging(PROC_NAME, log_dir)
  File "/opt/modules/hue-3.9.0-cdh5.5.0/desktop/core/src/desktop/log/__init__.py", line 146, in basic_logging
    logging.config.fileConfig(log_conf)
  File "/usr/lib64/python2.6/logging/config.py", line 84, in fileConfig
    handlers = _install_handlers(cp, formatters)
  File "/usr/lib64/python2.6/logging/config.py", line 162, in _install_handlers
    h = klass(*args)
  File "/usr/lib64/python2.6/logging/handlers.py", line 112, in __init__
    BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib64/python2.6/logging/handlers.py", line 64, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib64/python2.6/logging/__init__.py", line 835, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib64/python2.6/logging/__init__.py", line 854, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: '/opt/modules/hue-3.9.0-cdh5.5.0/logs/supervisor.log'
Solution:

Only the build directory is owned by root while everything else belongs to the normal user; make the ownership of the installation directory consistent.
Problem 23:

No databases are currently configured. Go to your Hue configuration and add a database under the "rdbms" section.
The following is the default:

###########################################################################
# Settings for the RDBMS application
###########################################################################

[librdbms]
  # The RDBMS app can have any number of databases configured in the databases
  # section. A database is known by its section name
  # (IE sqlite, mysql, psql, and oracle in the list below).

  [[databases]]
    # sqlite configuration.
    ## [[[sqlite]]]
      # Name to show in the UI.
      ## nice_name=SQLite

      # For SQLite, name defines the path to the database.
      ## name=/tmp/sqlite.db

      # Database backend to use.
      ## engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

    # mysql, oracle, or postgresql configuration.
    ## [[[mysql]]]
      # Name to show in the UI.
      ## nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      ## name=mysqldb

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      ## engine=mysql

      # IP or hostname of the database to connect to.
      ## host=localhost

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      ## port=3306

      # Username to authenticate with when connecting to the database.
      ## user=example

      # Password matching the username to authenticate with when
      # connecting to the database.
      ## password=example

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}
?
Most likely, you were not careful when editing this section.
?
Change it to:

    # sqlite configuration.
    [[[sqlite]]]
      # Name to show in the UI.
      nice_name=SQLite

      # For SQLite, name defines the path to the database.
      name=/opt/modules/hue-3.9.0-cdh5.5.0/desktop/desktop.db

      # Database backend to use.
      engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}
Then stop MySQL, and restart both MySQL and Hue:
[kfk@bigdata-pro01 conf]$ sudo service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[kfk@bigdata-pro01 conf]$
?
[kfk@bigdata-pro01 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor
[INFO] Not running as root, skipping privilege drop
starting server with options:
{'daemonize': False,
 'host': 'bigdata-pro01.kfk.com',
 'pidfile': None,
 'port': 8888,
 'server_group': 'hue',
 'server_name': 'localhost',
 'server_user': 'hue',
 'ssl_certificate': None,
 'ssl_cipher_list': 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA',
 'ssl_private_key': None,
 'threads': 40,
 'workdir': None}
That is: if Hue's database backend is the one named "My SQL DB" (engine mysql, oracle, or postgresql; mysql is the usual choice), then its database is metastore — the MySQL database created for Hive to serve as its metadata store:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://master:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
?
?
If Hue's backend is instead the one named SQLite, its database is /opt/modules/hue-3.9.0-cdh5.5.0/desktop/desktop.db.

Problem 24:
hue: HBase Thrift 1 server cannot be contacted: 9090

Nothing is answering on the HBase Thrift 1 port. The fix is the same as for Problem 25 below: start HBase's Thrift v1 server (hbase-daemon.sh start thrift), which listens on port 9090 by default, on the host your hue.ini points at.
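Before restarting anything, it can help to confirm whether anything is listening on the port at all. A small, generic sketch (the host and port below are placeholders for whatever your hue.ini's hbase section points at):

```python
# Quick reachability check for the HBase Thrift port Hue expects (9090 by default).
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # False unless an HBase Thrift server is actually listening there.
    print(port_open("127.0.0.1", 9090))
```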

Problem 25: Api Error: Invalid method name: 'getTableNames'

Cause:
After some digging, the suspicion was a Thrift version mismatch between the client and the HBase Thrift server — and indeed, the server had been started as thrift2, while the client (Hue) talks to it over thrift (v1).

Solution:
Since the root cause is the version mismatch between client and server, restart HBase's Thrift server in thrift1 mode:
# hbase-daemon.sh stop thrift2        # stop the v2 server
# hbase-daemon.sh start thrift        # start the v1 server instead

Problem 26: socket.error: [Errno 98] Address already in use

Cause: Hue was still running when you edited something like hue.ini from the command line, and you then started Hue again without shutting the old process down properly first — so the new process cannot bind the port.
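What happens can be reproduced with plain sockets — a minimal sketch in which the first socket stands in for the old Hue supervisor that was never stopped:

```python
# Minimal reproduction of "socket.error: [Errno 98] Address already in use":
# a second socket cannot bind a port while an old listener still holds it.
import errno
import socket

def bind_port(port):
    """Try to bind a TCP socket to `port`; return the socket, or the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return s
    except OSError as e:
        s.close()
        return e.errno

# The "old Hue" still listening on its port:
old_hue = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
old_hue.bind(("127.0.0.1", 0))          # let the OS pick a free port
old_hue.listen(1)
port = old_hue.getsockname()[1]

print(bind_port(port) == errno.EADDRINUSE)  # True: the port is already in use

old_hue.close()                          # shut the old process down properly...
fresh = bind_port(port)                  # ...and the port can be bound again
print(isinstance(fresh, socket.socket))  # True
fresh.close()
```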

Problem 27:

Problem 28: Table 'desktop_settings' doesn't exist

With the configuration above in place, restart the Hue process:

[hadoop@bigdatamaster hue]$ build/env/bin/supervisor

Starting Hue and opening it in a browser then fails, because the MySQL database has not been initialized:

DatabaseError: (1146, "Table 'hue.desktop_settings' doesn't exist")
or
ProgrammingError: (1146, "Table 'hive.django_session' doesn't exist")

Initialize the database:

[hadoop@bigdatamaster ~]$ cd /home/hadoop/app/hue-3.9.0-cdh5.5.4/build/env
[hadoop@bigdatamaster env]$ bin/hue syncdb
[hadoop@bigdatamaster env]$ bin/hue migrate

Once these finish, you can see in MySQL that the corresponding Hue tables have been created. Start Hue again and it is reachable normally.
Problem 29:
Running build/env/bin/supervisor fails with "No such file or directory".

Solution:
If you really cannot get Hue installed or built, copy an already-built Hue over from someone else's machine. Don't worry — it can be dropped into your Apache or CDH cluster just fine; the author has done exactly that.
The only thing you must redo by hand is the symlinks, which is easy: delete the stale ones and recreate them with ln -s.
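The delete-and-relink step looks like this in miniature (all paths below are illustrative placeholders; on the real machine you would use rm and ln -s in the shell):

```python
# Sketch of redoing a symlink after copying a built Hue tree from another
# machine: the copied link points at a path that only existed on the old box,
# so delete it and link it again against the new location.
import os
import tempfile

demo = tempfile.mkdtemp()
desktop = os.path.join(demo, "desktop")
os.mkdir(desktop)
link = os.path.join(demo, "link")

os.symlink("/home/otheruser/app/hue/desktop", link)  # stale link from the old machine
print(os.path.exists(link))  # False: the link is dangling

os.remove(link)              # delete the stale link...
os.symlink(desktop, link)    # ...and recreate it against the new path
print(os.path.exists(link))  # True
```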
Problem 30: forgot the password for the Hue web login?

For example, I started out with user hadoop and password hadoop, and now want to change both to hue.

At this point the username has been changed from hadoop to hue, but the password has not yet — don't panic, change it the same way. (Since Hue is a Django application, its management script should also be able to reset a password from the shell, e.g. build/env/bin/hue changepassword <username> — verify the exact command against your version.)

Welcome to join my WeChat official accounts: 大数据躺过的坑 and 人工智能躺过的坑.
You can also follow my personal blogs:
http://www.cnblogs.com/zlslch/, http://www.cnblogs.com/lchzls/ and http://www.cnblogs.com/sunnyDream/
For details see: http://www.cnblogs.com/zlslch/p/7473861.html

Life is short and I'd like to share. These accounts follow the open-source spirit of learning and exchanging for as long as one lives, gathering practical knowledge from the internet and from personal study and work — everything comes from the internet and is given back to it.
Current areas: big data, machine learning, deep learning, artificial intelligence, data mining and data analysis. Languages involved: Java, Scala, Python, Shell, Linux, etc. Plus everyday tips, problems and useful software for phones, computers and the internet. Stay in the group and you are bound to learn something every day.

QQ group for discussion and Q&A: 大数据和人工智能躺过的坑(总群) (161156071)

Reposted from: https://www.cnblogs.com/zlslch/p/6819622.html
總結(jié)
以上是生活随笔為你收集整理的安装Hue后的一些功能的问题解决干货总结(博主推荐)的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: BUPT 2012复试机考 4T
- 下一篇: OpenStack开发学习笔记01