Installing Hue 4.10.0 with Docker

I. Installation Steps

Reference: https://blog.csdn.net/hunheidaode/article/details/121961950

1. Pull the image and start it directly:

 docker run -it -p 8888:8888 gethue/hue:latest
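
The image can also be pulled explicitly beforehand; this is optional, since docker run downloads it automatically when it is not present locally:

 docker pull gethue/hue:latest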

2. Once the container is up, copy the configuration files inside it to /opt/module/hue on the host and modify them there:

 [root@homaybd03 conf]# docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED          STATUS         PORTS                                                                     NAMES
6f6a7d43d9ad   gethue/hue:latest                        "./startup.sh"           10 minutes ago   Up 7 minutes   0.0.0.0:8888->8888/tcp, :::8888->8888/tcp                                 hue

 [root@node2 ~]# docker cp 6f6a7d43d9ad:/usr/share/hue/desktop/conf /opt/module/hue/
[root@node2 ~]# 
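
Note that with a trailing slash, docker cp expects the destination directory to already exist. If /opt/module/hue is not present on the host yet, create it first (the path simply mirrors the example above):

 mkdir -p /opt/module/hue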

3. Modify the hue.ini configuration file:

Update the MySQL metadata database ([[database]]), the MySQL and Hive interpreters ([[interpreters]]), and the HiveServer2 address ([beeswax]). The edited file looks like this:

[root@homaybd03 conf]# cat hue.ini 
# Hue configuration file
# ===================================
#
# For complete documentation about the contents of this file, check
#   https://docs.gethue.com/administrator/configuration/
#
# All .ini files under the current directory are treated equally. Their
# contents are merged to form the Hue configuration, which can
# be viewed in Hue at
#   http://<hue_host>:<port>/dump_config

###########################################################################
# General configuration for API (authentication, etc)
###########################################################################

[desktop]

# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=

# Execute this script to produce the Django secret key. This will be used when
# 'secret_key' is not set.
## secret_key_script=

# Webserver listens on this address and port
http_host=0.0.0.0
http_port=8888

# A comma-separated list of available Hue load balancers
## hue_load_balancer=

# Time zone name
time_zone=America/Los_Angeles

# Enable or disable debug mode.
django_debug_mode=false

# Enable development mode, where notably static files are not cached.
## dev=false

# Enable or disable database debug mode.
## database_logging=false

# Whether to send debug messages from JavaScript to the server logs.
## send_dbug_messages=false

# Enable or disable backtrace for server error
http_500_debug_mode=false

# Enable or disable instrumentation. If django_debug_mode is True, this is automatically enabled
## instrumentation=false

# Server email for internal error messages
## django_server_email='hue@localhost.localdomain'

# Email backend
## django_email_backend=django.core.mail.backends.smtp.EmailBackend

# Set to true to use CherryPy as the webserver, set to false
# to use Gunicorn as the webserver. Defaults to CherryPy if
# key is not specified.
## use_cherrypy_server=true

# Gunicorn work class: gevent or eventlet, gthread or sync.
## gunicorn_work_class=eventlet

# The number of Gunicorn worker processes. If not specified, it uses: (number of CPU * 2) + 1.
## gunicorn_number_of_workers=1

# Webserver runs as this user
## server_user=hue
## server_group=hue

# This should be the Hue admin and proxy user
## default_user=hue

# This should be the hadoop cluster admin
## default_hdfs_superuser=hdfs

# If set to false, runcpserver will not actually start the web server.
# Used if Apache is being used as a WSGI container.
## enable_server=yes

# Number of threads used by the CherryPy web server
## cherrypy_server_threads=50

# This property specifies the maximum size of the receive buffer in bytes in thrift sasl communication,
# default value is 2097152 (2 MB), which equals to (2 * 1024 * 1024)
## sasl_max_buffer=2097152

# Hue will try to get the actual host of the Service, even if it resides behind a load balancer.
# This will enable an automatic configuration of the service without requiring custom configuration of the service load balancer.
# This is available for the Impala service only currently. It is highly recommended to only point to a series of coordinator-only nodes only.
# enable_smart_thrift_pool=false

# Filename of SSL Certificate
## ssl_certificate=

# Filename of SSL RSA Private Key
## ssl_private_key=

# Filename of SSL Certificate Chain
## ssl_certificate_chain=

# SSL certificate password
## ssl_password=

# Execute this script to produce the SSL password. This will be used when 'ssl_password' is not set.
## ssl_password_script=

# Disable all renegotiation in TLSv1.2 and earlier. Do not send HelloRequest messages, and ignore renegotiation requests via ClientHello. This option is only available with OpenSSL 1.1.0h and later and python 3.7
## ssl_no_renegotiation=python.version >= 3.7

# X-Content-Type-Options: nosniff This is a HTTP response header feature that helps prevent attacks based on MIME-type confusion.
## secure_content_type_nosniff=true

# X-Xss-Protection: \"1; mode=block\" This is a HTTP response header feature to force XSS protection.
## secure_browser_xss_filter=true

# X-Content-Type-Options: nosniff This is a HTTP response header feature that helps prevent attacks based on MIME-type confusion.
## secure_content_security_policy="script-src 'self' 'unsafe-inline' 'unsafe-eval' *.google-analytics.com *.doubleclick.net data:;img-src 'self' *.google-analytics.com *.doubleclick.net http://*.tile.osm.org *.tile.osm.org *.gstatic.com data:;style-src 'self' 'unsafe-inline' fonts.googleapis.com;connect-src 'self';frame-src *;child-src 'self' data: *.vimeo.com;object-src 'none'"

# Strict-Transport-Security HTTP Strict Transport Security(HSTS) is a policy which is communicated by the server to the user agent via HTTP response header field name "Strict-Transport-Security". HSTS policy specifies a period of time during which the user agent(browser) should only access the server in a secure fashion(https).
## secure_ssl_redirect=False
## secure_redirect_host=0.0.0.0
## secure_redirect_exempt=[]
## secure_hsts_seconds=31536000
## secure_hsts_include_subdomains=true

# List of allowed and disallowed ciphers in cipher list format.
# See http://www.openssl.org/docs/apps/ciphers.html for more information on
# cipher list format. This list is from
# https://wiki.mozilla.org/Security/Server_Side_TLS v3.7 intermediate
# recommendation, which should be compatible with Firefox 1, Chrome 1, IE 7,
# Opera 5 and Safari 1.
## ssl_cipher_list=ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

# Path to default Certificate Authority certificates.
## ssl_cacerts=/etc/hue/cacerts.pem

# Choose whether Hue should validate certificates received from the server.
## ssl_validate=true

# Default LDAP/PAM/.. username and password of the hue user used for authentications with other services.
# Inactive if password is empty.
# e.g. LDAP pass-through authentication for HiveServer2 or Impala. Apps can override them individually.
## auth_username=hue
## auth_password=

# Default encoding for site data
## default_site_encoding=utf-8

# Help improve Hue with anonymous usage analytics.
# Use Google Analytics to see how many times an application or specific section of an application is used, nothing more.
## collect_usage=true

# Tile layer server URL for the Leaflet map charts
# Read more on http://leafletjs.com/reference.html#tilelayer
# Make sure you add the tile domain to the img-src section of the 'secure_content_security_policy' configuration parameter as well.
## leaflet_tile_layer=http://{s}.tile.osm.org/{z}/{x}/{y}.png

# The copyright message for the specified Leaflet maps Tile Layer
## leaflet_tile_layer_attribution='&copy; <a href="http://osm.org/copyright">OpenStreetMap</a> contributors'

# All the map options accordingly to http://leafletjs.com/reference-0.7.7.html#map-options
# To change CRS, just use the name, ie. "EPSG4326"
## leaflet_map_options='{}'

# All the tile layer options, accordingly to http://leafletjs.com/reference-0.7.7.html#tilelayer
## leaflet_tile_layer_options='{}'

# X-Frame-Options HTTP header value. Use 'DENY' to deny framing completely
## http_x_frame_options=SAMEORIGIN

# Enable X-Forwarded-Host header if the load balancer requires it.
## use_x_forwarded_host=true

# Support for HTTPS termination at the load-balancer level with SECURE_PROXY_SSL_HEADER.
## secure_proxy_ssl_header=false

# Comma-separated list of Django middleware classes to use.
# See https://docs.djangoproject.com/en/1.4/ref/middleware/ for more details on middlewares in Django.
## middleware=desktop.auth.backend.LdapSynchronizationBackend

# Comma-separated list of regular expressions, which match the redirect URL.
# For example, to restrict to your local domain and FQDN, the following value can be used:
# ^\/.*$,^http:\/\/www.mydomain.com\/.*$
## redirect_whitelist=^(\/[a-zA-Z0-9]+.*|\/)$
...

(Intervening configuration sections omitted.)

# Configuration options for specifying the Desktop Database. For more info,
# see http://docs.djangoproject.com/en/1.11/ref/settings/#database-engine
# ------------------------------------------------------------------------
[[database]]
# Database engine is typically one of:
# postgresql_psycopg2, mysql, sqlite3 or oracle.
#
# Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name
# Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
# Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=<host>:<port>/<service_name>".
# Note for MariaDB use the 'mysql' engine.
engine=mysql
host=192.168.1.125
port=3306
user=root
password=homaytech
name=hue
# conn_max_age option to make database connection persistent value in seconds
# https://docs.djangoproject.com/en/1.11/ref/databases/#persistent-connections
## conn_max_age=0
# Execute this script to produce the database password. This will be used when 'password' is not set.
## password_script=/path/script
## name=desktop/desktop.db
## options={}
# Database schema, to be used only when public schema is revoked in postgres
## schema=public

# Configuration options for specifying the Desktop session.
# For more info, see https://docs.djangoproject.com/en/1.4/topics/http/sessions/
# ------------------------------------------------------------------------
[[session]]
# The name of the cookie to use for sessions.
# This can have any value that is not used by the other cookie names in your application.
## cookie_name=sessionid

# Configuration to determine whether test cookie should be added determine whether the user's browser supports cookies
# Should be disabled if django_session table is growing rapidly , Default value is true
## enable_test_cookie=true

# The cookie containing the users' session ID will expire after this amount of time in seconds.
# Default is 2 weeks.
## ttl=1209600

# The cookie containing the users' session ID and csrf cookie will be secure.
# Should only be enabled with HTTPS.
## secure=false

# The cookie containing the users' session ID and csrf cookie will use the HTTP only flag.
## http_only=true

# Use session-length cookies. Logs out the user when she closes the browser window.
## expire_at_browser_close=false

# If set, limits the number of concurrent user sessions. 1 represents 1 browser session per user. Default: 0 (unlimited sessions per user)
## concurrent_user_session_limit=0

# A list of hosts which are trusted origins for unsafe requests. See django's CSRF_TRUSTED_ORIGINS for more information
## trusted_origins=.cloudera.com

# Configuration options for connecting to an external SMTP server
# ------------------------------------------------------------------------
[[smtp]]

# The SMTP server information for email notification delivery
host=localhost
port=25
user=
password=

# Whether to use a TLS (secure) connection when talking to the SMTP server
tls=no

# Default email address to use for various automated notification from Hue
## default_from_email=hue@localhost

# Configuration options for KNOX integration for secured CDPD cluster
# ------------------------------------------------------------------------
[[knox]]

# This is a list of hosts that knox proxy requests can come from
## knox_proxyhosts=server1.domain.com,server2.domain.com
# List of Kerberos principal name which is allowed to impersonate others
## knox_principal=knox1,knox2
# Comma separated list of strings representing the ports that the Hue server can trust as knox port.
## knox_ports=80,8443

# Configuration options for Kerberos integration for secured Hadoop clusters
# ------------------------------------------------------------------------
[[kerberos]]

# Path to Hue's Kerberos keytab file
## hue_keytab=
# Kerberos principal name for Hue
## hue_principal=hue/hostname.foo.com
# Frequency in seconds with which Hue will renew its keytab
## REINIT_FREQUENCY=3600
# Path to keep Kerberos credentials cached
## ccache_path=/var/run/hue/hue_krb5_ccache
# Path to kinit
## kinit_path=/path/to/kinit
# Set to false if renew_lifetime in krb5.conf is set to 0m
## krb5_renewlifetime_enabled=true

# Mutual authentication from the server, attaches HTTP GSSAPI/Kerberos Authentication to the given Request object
## mutual_authentication="OPTIONAL" or "REQUIRED" or "DISABLED"

# Configuration options for using OAuthBackend (Core) login
# ------------------------------------------------------------------------
[[oauth]]
# The Consumer key of the application
## consumer_key=XXXXXXXXXXXXXXXXXXXXX

# The Consumer secret of the application
## consumer_secret=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

# The Request token URL
## request_token_url=https://api.twitter.com/oauth/request_token

# The Access token URL
## access_token_url=https://api.twitter.com/oauth/access_token

# The Authorize URL
## authenticate_url=https://api.twitter.com/oauth/authorize

# Configuration options for using OIDCBackend (Core) login for SSO
# ------------------------------------------------------------------------
[[oidc]]
# The client ID as relay party set in OpenID provider
## oidc_rp_client_id=XXXXXXXXXXXXXXXXXXXXX

# The client secret as relay party set in OpenID provider
## oidc_rp_client_secret=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

# The OpenID provider authoriation endpoint
## oidc_op_authorization_endpoint=https://keycloak.example.com/auth/realms/Cloudera/protocol/openid-connect/auth

# The OpenID provider token endpoint
## oidc_op_token_endpoint=https://keycloak.example.com/auth/realms/cloudera/protocol/openid-connect/token

# The OpenID provider user info endpoint
## oidc_op_user_endpoint=https://keycloak.example.com/auth/realms/cloudera/protocol/openid-connect/userinfo

# The OpenID provider signing key in PEM or DER format
## oidc_rp_idp_sign_key=/path/to/key_file

# The OpenID provider authoriation endpoint
## oidc_op_jwks_endpoint=https://keycloak.example.com/auth/realms/Cloudera/protocol/openid-connect/certs

# Whether Hue as OpenID Connect client verify SSL cert
## oidc_verify_ssl=true

# As relay party Hue URL path to redirect to after login
## login_redirect_url=https://localhost:8888/oidc/callback/

# The OpenID provider URL path to redirect to after logout
## logout_redirect_url=https://keycloak.example.com/auth/realms/cloudera/protocol/openid-connect/logout

# As relay party Hue URL path to redirect to after login
## login_redirect_url_failure=https://localhost:8888/hue/oidc_failed/

# Create a new user from OpenID Connect on login if it doesn't exist
## create_users_on_login=true

# When creating a new user, which 'claims' attribute from the OIDC provider to be used for creating the username.
#      Default to 'preferred_username'. Possible values include: 'email'
## oidc_username_attribute=preferred_username

# The group of users will be created and updated as superuser. To use this feature, setup in Keycloak:
# 1. add the name of the group here
# 2. in Keycloak, go to your_realm --> your_clients --> Mappers, add a mapper
#      Mapper Type: Group Membership (this is predefined mapper type)
#      Token Claim Name: group_membership (required exact string)
## superuser_group=hue_superusers

# Configuration options for Metrics
# ------------------------------------------------------------------------
[[metrics]]

# Enable the metrics URL "/desktop/metrics"
## enable_web_metrics=True

# If specified, Hue will write metrics to this file.
## location=/var/log/hue/metrics.json

# Time in milliseconds on how frequently to collect metrics
## collection_interval=30000

# One entry for each type of snippet.
[[interpreters]]
# Define the name and how to connect and execute the language.
# https://docs.gethue.com/administrator/configuration/editor/
[[[mysql]]]
  name = MySQL
  interface=sqlalchemy
#   ## https://docs.sqlalchemy.org/en/latest/dialects/mysql.html
   options='{"url": "mysql://root:homaytech@192.168.1.125:3306/hue"}'
#   ## options='{"url": "mysql://${USER}:${PASSWORD}@localhost:3306/hue"}'

 [[[hive]]]
   name=Hive
   interface=hiveserver2

# [[[llap]]]
#   name=LLAP
#   interface=hiveserver2

# [[[impala]]]
#   name=Impala
#   interface=hiveserver2

# [[[postgresql]]]
#   name = postgresql
#   interface=sqlalchemy
#   options='{"url": "postgresql://hue:hue@host:5432/hue"}'

# [[[druid]]]
#   name = Druid
#   interface=sqlalchemy
#   options='{"url": "druid://host:8082/druid/v2/sql/"}'

# [[[sparksql]]]
#   name = Spark Sql
#   interface=sqlalchemy
#   options='{"url": "hive://user:password@localhost:10000/database"}'

# [[[sparksql]]]
#   name=SparkSql
#   interface=livy

# [[[spark]]]
#   name=Scala
#   interface=livy

# [[[pyspark]]]
#   name=PySpark
#   interface=livy

# [[[r]]]
#   name=R
#   interface=livy

# [[[jar]]]
#   name=Spark Submit Jar
#   interface=livy-batch

# [[[java]]]
#   name=Java
#   interface=oozie

# [[[spark2]]]
#   name=Spark
#   interface=oozie

# [[[mapreduce]]]
#   name=MapReduce
#   interface=oozie

# [[[sqoop1]]]
#   name=Sqoop1
#   interface=oozie

# [[[distcp]]]
#   name=Distcp
#   interface=oozie

# [[[shell]]]
#   name=Shell
#   interface=oozie

# [[[dasksql]]]
# name=Dask-SQL
# interface=sqlalchemy
# ## Specific options for connecting to the dask-sql server.
# ## Please note, that dask-sql uses the presto protocol.
# # options='{"url": "presto://localhost:8080/catalog/default"}'

# [[[clickhouse]]]
# name=ClickHouse
# interface=sqlalchemy
# e.g. clickhouse://user:password@example.com:8124/test?protocol=https
# options='{"url": "clickhouse://localhost:8123"}'

# [[[vertica]]]
# name=Vertica
# interface=jdbc
# ## Specific options for connecting to a Vertica server.
# ## The JDBC driver vertica-jdbc-*.jar and its related jars need to be in the CLASSPATH environment variable.
# ## If 'user' and 'password' are omitted, they will be prompted in the UI.
# options='{"url": "jdbc:vertica://localhost:5434", "driver": "com.vertica.jdbc.Driver"}'

## Define which query and table examples can be automatically setup for the available dialects.
# [[examples]]
## If installing the examples automatically at startup.
# auto_load=false
## If automatically loading the dialect example at Editor opening.
# auto_open=false
## Names of the saved queries to install. All if empty.
# queries=
## Names of the tables to install. All if empty.
# tables=

###########################################################################
# Settings to configure your Analytics Dashboards
###########################################################################

[dashboard]

# Activate the Dashboard link in the menu.
## is_enabled=true

# Activate the SQL Dashboard (beta).
## has_sql_enabled=false

# Activate the Query Builder (beta).
## has_query_builder_enabled=false

# Activate the static report layout (beta).
## has_report_enabled=false

# Activate the new grid layout system.
## use_gridster=true

# Activate the widget filter and comparison (beta).
## has_widget_filter=false

# Activate the tree widget (to drill down fields as dimensions, alpha).
## has_tree_widget=false

# Setting this value to true opens up for possible xss attacks.
## allow_unsecure_html=false

[[engines]]

#  [[[solr]]]
#  Requires Solr 6+
##  analytics=true
##  nesting=false

#  [[[sql]]]
##  analytics=true
##  nesting=false

###########################################################################
# Settings to configure your Hadoop cluster.
###########################################################################

[hadoop]

# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
  # HA support by using HttpFs

[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://localhost:8020

# NameNode logical name.
## logical_name=

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
## webhdfs_url=http://localhost:50070/webhdfs/v1

# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false

# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True

# Directory of the Hadoop configuration
## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'

# Whether Hue should list this HDFS cluster. For historical reason there is no way to disable HDFS.
## is_enabled=true

# Configuration for YARN (MR2)
# ------------------------------------------------------------------------
[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
## resourcemanager_host=localhost

# The port where the ResourceManager IPC listens on
## resourcemanager_port=8032

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
## resourcemanager_api_url=http://localhost:8088

# URL of the ProxyServer API
## proxy_api_url=http://localhost:8088

###########################################################################
# Settings to configure Beeswax with Hive
###########################################################################

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
 hive_server_host=192.168.1.124

# Binary thrift port for HiveServer2.
 hive_server_port=10000

# Http thrift port for HiveServer2.
## hive_server_http_port=10001

# Host where LLAP is running
## llap_server_host = localhost

# LLAP binary thrift port
## llap_server_port = 10500

# LLAP HTTP Thrift port
## llap_server_thrift_port = 10501

# Alternatively, use Service Discovery for LLAP (Hive Server Interactive) and/or Hiveserver2, this will override server and thrift port

# Whether to use Service Discovery for LLAP
## hive_discovery_llap = true

# is llap (hive server interactive) running in an HA configuration (more than 1)
# important as the zookeeper structure is different
## hive_discovery_llap_ha = false

# Shortcuts to finding LLAP znode Key
# Non-HA - hiveserver-interactive-site - hive.server2.zookeeper.namespace ex hive2 = /hive2
# HA-NonKerberized - <llap_app_name>_llap ex app name llap0 = /llap0_llap
# HA-Kerberized - <llap_app_name>_llap-sasl ex app name llap0 = /llap0_llap-sasl
## hive_discovery_llap_znode = /hiveserver2-hive2

# Whether to use Service Discovery for HiveServer2
## hive_discovery_hs2 = true

# Hiveserver2 is hive-site hive.server2.zookeeper.namespace ex hiveserver2 = /hiverserver2
## hive_discovery_hiveserver2_znode = /hiveserver2

# Applicable only for LLAP HA
# To keep the load on zookeeper to a minimum
# ---- we cache the LLAP activeEndpoint for the cache_timeout period
# ---- we cache the hiveserver2 endpoint for the length of session
# configurations to set the time between zookeeper checks
## cache_timeout = 60

# Host where Hive Metastore Server (HMS) is running.
# If Kerberos security is enabled, the fully-qualified domain name (FQDN) is required.
## hive_metastore_host=localhost

# Configure the port the Hive Metastore Server runs on.
## hive_metastore_port=9083

# Hive configuration directory, where hive-site.xml is located
## hive_conf_dir=/etc/hive/conf

# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120

# Choose whether to use the old GetLog() thrift call from before Hive 0.14 to retrieve the logs.
# If false, use the FetchResults() thrift call from Hive 1.0 or more instead.
## use_get_log_api=false

# Limit the number of partitions that can be listed.
## list_partitions_limit=10000

# The maximum number of partitions that will be included in the SELECT * LIMIT sample query for partitioned tables.
## query_partitions_limit=10

# A limit to the number of rows that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_row_limit=100000

# A limit to the number of bytes that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_bytes_limit=-1

# Hue will try to close the Hive query when the user leaves the editor page.
# This will free all the query resources in HiveServer2, but also make its results inaccessible.
## close_queries=false

# Hue will use at most this many HiveServer2 sessions per user at a time.
# For Tez, increase the number to more if you need more than one query at the time, e.g. 2 or 3 (Tez has a maximum of 1 query by session).
# -1 is unlimited number of sessions.
## max_number_of_sessions=1

# When set to True, Hue will close sessions created for background queries and open new ones as needed.
# When set to False, Hue will keep sessions created for background queries opened and reuse them as needed.
# This flag is useful when max_number_of_sessions != 1
## close_sessions=max_number_of_sessions != 1

# Thrift version to use when communicating with HiveServer2.
# Version 11 comes with Hive 3.0. If issues, try 7.
## thrift_version=11

# A comma-separated list of white-listed Hive configuration properties that users are authorized to set.
## config_whitelist=hive.map.aggr,hive.exec.compress.output,hive.exec.parallel,hive.execution.engine,mapreduce.job.queuename

# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
## auth_username=hue
## auth_password=

# Use SASL framework to establish connection to host.
## use_sasl=false

# Enable the HPLSQL mode.
## hplsql=false

# Max number of objects (columns, tables, databases) available to list in the left assist, autocomplete, table browser etc.
# Setting this higher than the default can degrade performance.
## max_catalog_sql_entries=5000

[[ssl]]
# Path to Certificate Authority certificates.
## cacerts=/etc/hue/cacerts.pem

# Choose whether Hue should validate certificates received from the server.
## validate=true

###########################################################################
# Settings for the User Admin application
###########################################################################

[useradmin]
# Default home directory permissions
## home_dir_permissions=0755

# Disable to use umask from hdfs else new user home directory would be created with the permissions from home_dir_permissions
## use_home_dir_permissions=true

# The name of the default user group that users will be a member of
## default_user_group=default

[[password_policy]]
# Set password policy to all users. The default policy requires password to be at least 8 characters long,
# and contain both uppercase and lowercase letters, numbers, and special characters.

## is_enabled=false
## pwd_regex="^(?=.*?[A-Z])(?=(.*[a-z]){1,})(?=(.*[\d]){1,})(?=(.*[\W_]){1,}).{8,}$"
## pwd_hint="The password must be at least 8 characters long, and must contain both uppercase and lowercase letters, at least one number, and at least one special character."
## pwd_error_message="The password must be at least 8 characters long, and must contain both uppercase and lowercase letters, at least one number, and at least one special character."

###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
## oozie_url=http://localhost:11000/oozie

# Requires FQDN in oozie_url if enabled
## security_enabled=false

# Location on HDFS where the workflows/coordinator are deployed when submitted.
## remote_deployement_dir=/user/hue/oozie/deployments

###########################################################################
# Settings for the AWS lib
###########################################################################

[aws]
# Enable the detection of an IAM role providing the credentials automatically. It can take a few seconds.
## has_iam_detection=false

[[aws_accounts]]
# Default AWS account
## [[[default]]]
# AWS credentials
## access_key_id=
## secret_access_key=
## security_token=

# Execute this script to produce the AWS access key ID.
## access_key_id_script=/path/access_key_id.sh

# Execute this script to produce the AWS secret access key.
## secret_access_key_script=/path/secret_access_key.sh

# Allow to use either environment variables or
# EC2 InstanceProfile to retrieve AWS credentials.
## allow_environment_credentials=yes

# AWS region to use, if no region is specified, will attempt to connect to standard s3.amazonaws.com endpoint
## region=us-east-1

# Endpoint overrides
## host=

# Proxy address and port
## proxy_address=
## proxy_port=8080
## proxy_user=
## proxy_pass=

# Secure connections are the default, but this can be explicitly overridden:
## is_secure=true

# The default calling format uses https://<bucket-name>.s3.amazonaws.com but
# this may not make sense if DNS is not configured in this way for custom endpoints.
# e.g. Use boto.s3.connection.OrdinaryCallingFormat for https://s3.amazonaws.com/<bucket-name>
## calling_format=boto.s3.connection.OrdinaryCallingFormat

# The time in seconds before a delegate key is expired. Used when filebrowser/redirect_download is used. Default to 4 Hours.
## key_expiry=14400

[[prometheus]]
# Configuration options for Prometheus API.
## api_url=http://localhost:9090/api
[root@homaybd03 conf]# 
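
The [[database]] section above points Hue's metadata store at a MySQL server on 192.168.1.125. That database has to exist before Hue starts against it; a minimal sketch for creating it, assuming a MySQL client on the host and the root credentials from the config:

 mysql -h 192.168.1.125 -P 3306 -u root -p -e "CREATE DATABASE IF NOT EXISTS hue DEFAULT CHARACTER SET utf8;"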

4. Restart the container with the modified configuration mounted
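
If the container started in step 1 is still running, stop and remove it first so that port 8888 and the name hue are free (the name matches the docker ps output above):

 docker stop hue
 docker rm hue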

 docker run -it -d  -p 8888:8888 --name hue -v /opt/module/hue/conf:/usr/share/hue/desktop/conf  gethue/hue:latest

II. Installation Demo

After a successful installation, the admin console can be reached at:

http://192.168.1.123:8888/hue/dashboard/new_search?engine=hive

Login credentials: admin/admin
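
If the login page does not come up, confirm the container is running and inspect its startup logs (standard Docker commands; hue is the container name from step 4):

 docker ps
 docker logs -f hue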


Using the admin account, add a new hive user; this hive user performs the Hive operations, since the default admin account has no Hive permissions.

Credentials: hive/hive



