
docker-hive's People

Contributors

earthquakesan, gmouchakis, martint, takuti


docker-hive's Issues

Failed to connect to localhost:10000

Hello,

First of all, thanks for sharing your great work!

I've followed the README and everything seems to be running fine:

# docker-compose ps
                 Name                               Command               State                  Ports               
---------------------------------------------------------------------------------------------------------------------
datanode                                 /entrypoint.sh /run.sh           Up      50075/tcp                          
dockerhive_hive-metastore-postgresql_1   /docker-entrypoint.sh postgres   Up      5432/tcp                           
hive-metastore                           entrypoint.sh /opt/hive/bi ...   Up      10000/tcp, 10002/tcp               
hive-server                              entrypoint.sh /bin/sh -c s ...   Up      0.0.0.0:10000->10000/tcp, 10002/tcp
namenode                                 /entrypoint.sh /run.sh           Up      50070/tcp                
# docker ps
CONTAINER ID        IMAGE                                           COMMAND                  CREATED             STATUS                    PORTS                                 NAMES
863752049e66        dockerhive_hive-server                          "entrypoint.sh /bi..."   27 minutes ago      Up 5 minutes              0.0.0.0:10000->10000/tcp, 10002/tcp   hive-server
ba79079f48a8        dockerhive_hive-metastore                       "entrypoint.sh /op..."   27 minutes ago      Up 5 minutes              10000/tcp, 10002/tcp                  hive-metastore
4396594ad289        bde2020/hadoop-datanode:1.1.0-hadoop2.8-java8   "/entrypoint.sh /r..."   27 minutes ago      Up 27 minutes (healthy)   50075/tcp                             datanode
f78224e07505        bde2020/hadoop-namenode:1.1.0-hadoop2.8-java8   "/entrypoint.sh /r..."   27 minutes ago      Up 27 minutes (healthy)   50070/tcp                             namenode
b632b3fd642f        bde2020/hive-metastore-postgresql:2.1.0         "/docker-entrypoin..."   27 minutes ago      Up 27 minutes             5432/tcp                              dockerhive_hive-metastore-postgresql_1

But I can't connect to Hive to load data:

/opt# /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.8.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
17/10/23 12:48:30 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10000
Could not open connection to the HS2 server. Please check the server URI and if the URI is correct, then ask the administrator to check the server status.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
Beeline version 2.1.1 by Apache Hive

I've tried to connect to the hive-metastore container but it doesn't work either. I've also tried to connect from the beeline prompt:

beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: 
Enter password for jdbc:hive2://localhost:10000: 
17/10/23 12:52:48 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10000
Could not open connection to the HS2 server. Please check the server URI and if the URI is correct, then ask the administrator to check the server status.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
beeline> 

Did I miss something?
Thanks
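
A quick way to narrow this down is to check whether HiveServer2 has actually finished starting, and then run beeline from inside the hive-server container, which bypasses the host port mapping entirely. A minimal sketch, assuming the compose service is named hive-server as above:

# look for the "Starting HiveServer2" line; the server can take a while after the container shows Up
docker-compose logs hive-server | grep -i "Starting HiveServer2"
# run beeline inside the container itself to rule out host networking
docker-compose exec hive-server /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000 -e "show databases;"

If the in-container connection works, the problem is between the host and the published port; if it is also refused, HiveServer2 is not up yet (or has crashed) and its log output is the place to look.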

How can I post a text file to HDFS?

When I post a text file to HDFS from a local Java client:

try (TISFSDataOutputStream outputStream = fileSystem.create(p, true)) {
    org.apache.commons.io.IOUtils.write(
            IOUtils.loadResourceFromClasspath(DataXHdfsWriter.class,
                    "hdfs-datax-writer-assert-without-option-val.json"),
            outputStream, TisUTF8.get());
}

However, the server side throws an exception:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/mozhenghua/com.qlangtech.tis.hdfs.impl.HdfsFileSystemFactory@29d80d2b/test/test could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)

I have exposed these ports on the namenode:

    ports:
      - "50070:50070"
      - "8020:8020"

I also found the historical issue #15, which seems to have been fixed. Can anyone help? Thanks.
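
The "could only be replicated to 0 nodes ... 1 node(s) are excluded" message usually means the namenode accepted the write but the client could not reach the datanode's data-transfer port. A rough way to separate an HDFS problem from a client networking problem, assuming the namenode container is named namenode and using a placeholder file name:

# if a put from inside the namenode container succeeds, HDFS itself is healthy
docker cp test.json namenode:/tmp/test.json
docker exec -it namenode hdfs dfs -put /tmp/test.json /tmp/
# an external Java client must also reach the datanode itself, not just the namenode; a commonly
# used remedy is publishing the datanode ports and setting dfs.client.use.datanode.hostname=true on the client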

Error starting the containers

Hello,

I tried to start the containers, but I got these errors. What should I do?
c:\Temp\docker-hive>docker-compose up -d
Starting docker-hive_datanode_1                  ... error
Starting docker-hive_hive-metastore-postgresql_1 ... done
Starting docker-hive_hive-metastore_1            ... done
Starting docker-hive_namenode_1                  ... error
Starting docker-hive_hive-server_1               ... done
Starting docker-hive_presto-coordinator_1        ... done
ERROR: for datanode Cannot start service datanode: Ports are not available: listen tcp 0.0.0.0:50075: bind: An attempt was made to access a socket in a way forbidden by its access permissions.

ERROR: for namenode Cannot start service namenode: Ports are not available: listen tcp 0.0.0.0:50070: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: Encountered errors while bringing up the project.
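
On Windows this particular bind error ("forbidden by its access permissions") is usually not another process holding the port but a Hyper-V/WinNAT excluded port range that happens to cover 50070/50075. A check and the workaround that is commonly reported, run from an elevated Windows prompt:

netsh interface ipv4 show excludedportrange protocol=tcp
REM if 50070/50075 fall inside an excluded range, restarting winnat often releases it
net stop winnat
net start winnat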

Lack of license

A colleague and I have used/adapted some of the Dockerfiles and config provided here.
To keep in line with regulations ("Generally speaking, the absence of a license means that the default copyright laws apply."), I'd appreciate if you could add a license to this repo.

Is there a way to create a table from a CSV file located on AWS S3?

Hi,

I would like to know if it is possible to configure Docker Hive to be able to access CSV files stored in an S3 bucket.
In my case, I have the following example (table):

CREATE EXTERNAL TABLE test(
 sequence String, Timestampval Timestamp, chargeState String, level String, temperature String, x String, y String, z String, designation String, isCalibrated String, min String, max String, block String, margin String, state string, Child_Type string, Sub_Child_Type string, SerialNumber bigint, metertype string, currentfile string, data_date string, hour string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES 
('separatorChar'=';')
STORED AS INPUTFORMAT
 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
 's3://useast1-dataload-prod/file_data/CurrentCondition/2024-04-30_02-04/';

Any help is appreciated
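
Out of the box these images only talk to HDFS, so reading from S3 means wiring up the S3A connector on the Hive/Hadoop side. A rough sketch, assuming the CORE_CONF_* convention these images use to generate core-site.xml (values are placeholders), and noting that the hadoop-aws and AWS SDK jars must also be on the classpath, which may require extending the image:

# additions to hadoop-hive.env
CORE_CONF_fs_s3a_access_key=YOUR_ACCESS_KEY
CORE_CONF_fs_s3a_secret_key=YOUR_SECRET_KEY

With that in place, the table LOCATION would normally use the s3a:// scheme rather than s3://.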

Hive CLI access

Docker Compose runs all the datanodes and Hive,
but I am not able to view the exposed ports 10000 and 10002

when I try http://{DOCKER_IP}:10000.

Also, how do I get the Hive CLI?

CONTAINER ID   IMAGE                           COMMAND                  CREATED          STATUS          PORTS                  NAMES
c55c66febca7   bde2020/hadoop-datanode:1.0.0   "/entrypoint.sh /run."   29 minutes ago   Up 29 minutes                          datanode2
988dc6e7c0f3   bde2020/hadoop-datanode:1.0.0   "/entrypoint.sh /run."   29 minutes ago   Up 29 minutes                          datanode3
7ff18f1ca4eb   bde2020/hive                    "/entrypoint.sh /bin/"   29 minutes ago   Up 29 minutes   10000/tcp, 10002/tcp   hive
e78c5b11e84e   bde2020/hadoop-datanode:1.0.0   "/entrypoint.sh /run."   30 minutes ago   Up 30 minutes                          datanode1
520d8e5337df   bde2020/hadoop-namenode:1.0.0   "/entrypoint.sh /run."   31 minutes ago   Up 31 minutes                          namenode

For the Hive CLI, I get the error below when I try it:
$docker exec -it hive bash
root@hive:/opt# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/hive/conf/hive-log4j2.properties
Exception in thread "main" java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1550)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3080)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3108)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:521)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:494)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:709)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1548)
... 15 more
Caused by: javax.jdo.JDOFatalDataStoreException: Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=/hive-metastore/metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to start database '/hive-metastore/metastore_db' with class loader sun.misc.Launcher$AppClassLoader@5115a298, see the next exception for details.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection40.(Unknown Source)
at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
at org.apache.derby.jdbc.Driver20.connect(Unknown Source)
at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:187)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
at com.jolbox.bonecp.BoneCP.(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483)
at org.datanucleus.store.rdbms.RDBMSStoreManager.(RDBMSStoreManager.java:296)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606)
at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
at org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133)
at org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338)
at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:397)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:426)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:320)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:287)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:55)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:64)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:516)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:481)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:547)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:370)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5749)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:219)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1548)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3080)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3108)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:521)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:494)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:709)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.sql.SQLException: Failed to start database '/hive-metastore/metastore_db' with class loader sun.misc.Launcher$AppClassLoader@5115a298, see the next exception for details.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 75 more
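
Port 10000 is HiveServer2's Thrift/JDBC port (it will not answer plain HTTP) and 10002 is the HiveServer2 web UI; in this compose listing neither is published to the host, so http://{DOCKER_IP}:10000 cannot respond. The Derby stack trace above is the hive CLI trying to open a local metastore database at /hive-metastore/metastore_db; using beeline against HiveServer2 avoids that path. A minimal sketch, assuming the container is named hive as in the docker ps output and that its entrypoint runs HiveServer2:

docker exec -it hive bash
/opt/hive/bin/beeline -u jdbc:hive2://localhost:10000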

WARN jdbc.HiveConnection: Failed to connect to localhost:10000

Hi,

this is my configuration:

hive-server:
    container_name:           hive-server
    image:                    bde2020/hive:2.3.2-postgresql-metastore
    env_file:
          - ./hive_build/hadoop-hive.env
    environment:
        HIVE_CORE_CONF_javax_jdo_option_ConnectionURL: "jdbc:postgresql://hive-metastore/metastore"
        SERVICE_PRECONDITION: "hive-metastore:9083"
    ports:
        - "10000:10000"
    
hive-metastore:
    container_name:           hive-metastore
    image:                    bde2020/hive:2.3.2-postgresql-metastore
    env_file:
        - ./hive_build/hadoop-hive.env
    command:                  /opt/hive/bin/hive --service metastore
    environment:
        SERVICE_PRECONDITION: "hadoop-namenode:50070 hadoop-datanode1:50075 hive-metastore-postgresql:5432"
    ports:
        - "9083:9083"

hive-metastore-postgresql:
    container_name:           hive-metastore-postgresql
    image:                    bde2020/hive-metastore-postgresql:2.3.0
    ports:
        - "5433:5432"

Hive should also connect to two more HDFS containers (hadoop-namenode, hadoop-datanode1) that I have built and that are working just fine on the right ports.

When I run:

 docker-compose exec hive-server bash
 /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000

I get:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000
19/04/30 12:21:53 [main]: WARN jdbc.HiveConnection: Failed to connect to localhost:10000
Could not open connection to the HS2 server. Please check the server URI and if the URI is correct, then ask the administrator to check the server status.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
Beeline version 2.3.2 by Apache Hive
beeline>

From the docker-compose logs I don't see any specific errors, so the containers seem to work fine.

Any help on this please?
Thanks
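
From this snippet, hive-server waits on hive-metastore:9083, and hive-metastore in turn waits on the namenode, datanode and postgres; if any precondition never becomes available, HiveServer2 is simply never started and port 10000 stays closed even though the container keeps running. A small check, assuming the container names from the snippet:

# the entrypoint prints "[N/100] check for hive-metastore:9083..." until the precondition is met
docker logs hive-server | tail -n 20
# confirm a HiveServer2 JVM is actually running inside the container
docker exec -it hive-server bash -c "ps -ef | grep -i hiveserver2 | grep -v grep"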

Debian dependency not found, error

Got this while building the docker image... everything looks good up to this point...

`Ign http://deb.debian.org jessie InRelease
Get:1 http://deb.debian.org jessie-updates InRelease [16.3 kB]
Ign http://ftp.debian.org jessie-backports InRelease
Get:2 http://security.debian.org jessie/updates InRelease [44.9 kB]
Ign http://ftp.debian.org jessie-backports Release.gpg
Get:3 http://deb.debian.org jessie Release.gpg [1652 B]
Ign http://ftp.debian.org jessie-backports Release
Get:4 http://deb.debian.org jessie Release [77.3 kB]
Err http://ftp.debian.org jessie-backports/main amd64 Packages

Err http://ftp.debian.org jessie-backports/main amd64 Packages

Err http://ftp.debian.org jessie-backports/main amd64 Packages

Err http://ftp.debian.org jessie-backports/main amd64 Packages

Err http://ftp.debian.org jessie-backports/main amd64 Packages
404 Not Found
Get:5 http://deb.debian.org jessie-updates/main amd64 Packages [20 B]
Get:6 http://deb.debian.org jessie/main amd64 Packages [9098 kB]
Get:7 http://security.debian.org jessie/updates/main amd64 Packages [932 kB]
Fetched 10.2 MB in 12s (788 kB/s)
W: Failed to fetch http://ftp.debian.org/debian/dists/jessie-backports/main/binary-amd64/Packages 404 Not Found

E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update && apt-get install -y wget procps && wget https://archive.apache.org/dist/hive/hive-$HIVE_VERSION/apache-hive-$HIVE_VERSION-bin.tar.gz && tar -xzvf apache-hive-$HIVE_VERSION-bin.tar.gz && mv apache-hive-$HIVE_VERSION-bin hive && wget https://jdbc.postgresql.org/download/postgresql-9.4.1212.jar -O $HIVE_HOME/lib/postgresql-jdbc.jar && rm apache-hive-$HIVE_VERSION-bin.tar.gz && apt-get --purge remove -y wget && apt-get clean && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100`
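
jessie-backports has been removed from the regular Debian mirrors, so the 404 during apt-get update is expected on this old base image. A commonly used workaround is to repoint the backports entry at archive.debian.org before installing packages; a sketch, assuming the entry lives in /etc/apt/sources.list (it may also be under /etc/apt/sources.list.d/):

sed -i 's|http://ftp.debian.org/debian jessie-backports|http://archive.debian.org/debian jessie-backports|g' /etc/apt/sources.list
echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/99archive
apt-get update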

How can I access the YARN Web UI?

I'm a noob and want to know: does this compose project launch YARN in Hadoop?
The aim is to ensure the OLAP queries engage the MapReduce process.
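
As far as I can tell this compose file only defines HDFS (namenode/datanode), the Hive services and Presto, so there is no ResourceManager web UI to reach unless YARN containers are added. A quick check, with the port the RM UI would normally use:

# see whether any YARN services are defined at all
grep -iE "resourcemanager|nodemanager" docker-compose.yml || echo "no YARN services in this compose file"
# if a ResourceManager were added, its web UI is normally on port 8088
curl -s http://localhost:8088/cluster | head -n 5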

Spark app can't connect to HDFS: RPC response exceeds maximum data length

I'm trying to run a Spark app that connects to HDFS using the docker-compose in this repo (which I have modified). The Spark container I am using is, I believe, able to connect to the HDFS container, but it receives an RPC error soon after.

I've tried a handful of things with no success and was wondering if anyone has an idea of how I can troubleshoot this:

java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "sparkmaster/172.18.13.9"; destination host is: "datanode":50075;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:808)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
	at org.apache.hadoop.ipc.Client.call(Client.java:1437)
	at org.apache.hadoop.ipc.Client.call(Client.java:1347)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
	at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:874)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1697)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1491)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1488)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1503)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668)
	at com.mastercard.ess.schema.SchemaRegistry$.addSchema(SchemaRegistry.scala:45)
	at com.mastercard.ess.inputs.builder.KafkaInputBuilder$.build(KafkaInputBuilder.scala:20)
	at com.mastercard.ess.inputs.InputRegistrar$.registerInput(InputRegistrar.scala:22)
	at com.mastercard.ess.jobs.JobRegistrationManager$$anonfun$registerFromJson$1.apply(JobRegistrationManager.scala:197)
	at com.mastercard.ess.jobs.JobRegistrationManager$$anonfun$registerFromJson$1.apply(JobRegistrationManager.scala:179)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at com.mastercard.ess.jobs.JobRegistrationManager$.registerFromJson(JobRegistrationManager.scala:179)
	at com.mastercard.ess.jobs.JobRegistrationManager$.registerFromHDFSJson(JobRegistrationManager.scala:153)
	at com.mastercard.ess.jobs.JobRegistrationManager$.registerJobsAndMonitorChanges(JobRegistrationManager.scala:61)
	at com.mastercard.ess.StreamKafka$.main(StreamKafka.scala:45)
	at com.mastercard.ess.StreamKafka.main(StreamKafka.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
	at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1810)
	at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1165)
	at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1061)
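
The destination in the message, "datanode":50075, is the datanode's HTTP port; HDFS RPC normally goes to the namenode on 8020, and speaking RPC to an HTTP endpoint is a classic way to get "RPC response exceeds maximum data length". A hedged sketch of pointing the Spark app at the right endpoint (class and jar names are placeholders):

spark-submit \
  --conf spark.hadoop.fs.defaultFS=hdfs://namenode:8020 \
  --class com.example.StreamKafka \
  app.jar

The namenode hostname also has to resolve from the Spark container, i.e. both containers should sit on the same Docker network.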

No space on DataNode

Hi,
I deployed the compose on a 3-node swarm, but I cannot put any file on HDFS because there is no space available:

root@9362d3d9c816:/opt# hadoop fs -df
Filesystem            Size  Used  Available  Use%
hdfs://namenode:8020     0     0          0  NaN%

and I get this error when trying to copy a text file:

root@9362d3d9c816:/opt# hadoop fs -copyFromLocal test.txt /user/hive/
18/05/29 15:41:25 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
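
"0 datanode(s) running" in the second message suggests the datanodes never registered with the namenode, which is a common symptom on swarm when the overlay network or hostnames differ from the single-host compose setup. A quick way to confirm, run inside the namenode container:

# "Live datanodes (0)" means no datanode has registered, hence zero reported capacity
hdfs dfsadmin -report | grep -i -A 2 "live datanodes"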

Adding an all-in-one PrestoDB?

I have a Docker image, shawnzhu/prestodb, which we use internally to query data on HDFS via PrestoDB, where Hive is provisioned with Docker Compose from this repo.

@earthquakesan If you're interested in this idea, I will create a PR against the docker-compose.yml. Or let me know if there is any other way to get this feature in. Thanks!

Add export of port 8020 for namenode

Please add an export for port 8020; I needed it when I was working with Kafka Connect and the HDFS connector. It is a one-line change:

ports:
  - "50070:50070"
  - "8020:8020"

LOAD DATA unable to locate files

Hello,

I am trying to load data from files into Hive using this Docker Compose environment, but it is unable to locate the files. I am putting the data files through the REST API of the name node, port 50070 (if I remember it correctly), without problems: I can see the files through the file browser and by running hdfs commands inside the name node container. However, the hive-server container doesn't seem to recognise the same directory tree as the name node, so when I use the Hive LOAD DATA instruction with that path it fails. Inside the hive-server container, hdfs obviously doesn't have that file either. It is only able to load files if I create them locally in the hive-server container (using a volume, because I didn't find an editor :p).

I haven't changed anything in the compose file, apart from using version 3, creating a common network, and defining dependencies instead of links.

Would you be able to point me in the right direction? How can I change the Hive configuration to check that I am writing to the same name node?

I am new to Hadoop and Hive, so apologies for any architecture misconceptions.

Thanks,
Yeray
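
One way to confirm whether the two containers really see different namespaces is to list the same HDFS path from both; if they match, the problem is more likely the LOAD DATA path itself (LOAD DATA INPATH expects an HDFS URI, while LOAD DATA LOCAL INPATH reads the hive-server's local filesystem). A sketch, assuming the compose service names namenode and hive-server and placeholder paths:

docker-compose exec namenode    hdfs dfs -ls /user/hive/warehouse
docker-compose exec hive-server hadoop fs -ls /user/hive/warehouse
# in beeline, a fully qualified URI removes any ambiguity about which namenode is meant:
# LOAD DATA INPATH 'hdfs://namenode:8020/user/me/data.csv' INTO TABLE my_table;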

Not able to start hive metastore postgresql container

I am getting the error below on a Mac M1 (Apple silicon). My Docker Desktop is up and running fine.
All the other containers start, except this one.

hive-metastore-postgresql: forward host lookup failed: Unknown host
docker-hive-master-hive-metastore-1 | [11/100] check for hive-metastore-postgresql:5432...
docker-hive-master-hive-metastore-1 | [11/100] hive-metastore-postgresql:5432 is not available yet
docker-hive-master-hive-metastore-1 | [11/100] try in 5s once again ...
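
The "forward host lookup failed: Unknown host" line means the hive-metastore container cannot resolve hive-metastore-postgresql at all, which on an M1 machine is often because that container exited right after starting (for example if the image has no arm64 build). A first check (the container name is assumed from the log prefix above):

# is the postgres container actually running, or did it exit?
docker ps -a --filter name=hive-metastore-postgresql
# if it exited, its logs usually say why
docker logs docker-hive-master-hive-metastore-postgresql-1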

[Question] How to run this image on the Windows 10 platform

Hi,
This is not an issue; it is a question about running this image on a Windows 10 system.
It gives me the error "image operating system "linux" cannot be used on this platform".
So the question is: does this image only work on a Linux OS?
Dileep

How to connect to Hive remotely?

I deployed it on Windows with Docker Desktop (WSL2).

"jdbc:hive2://localhost:10000" doesn't work:

package com.aster;

import java.sql.*;

public class HiveJdbcTest {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {

        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }

        // After packaging, the environment that runs this JAR needs a hosts-file entry
        // mapping localhost to the cluster's public (or private) IP address.
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/deafult");

        Statement stmt = con.createStatement();

        String sql = "select * from pokes limit 10";
        ResultSet res = stmt.executeQuery(sql);

        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2)); // JDBC columns are 1-based
        }

    }
}
Exception in thread "main" java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/deafult: java.net.SocketException: Connection reset
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:224)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:270)
	at com.aster.HiveJdbcTest.main(HiveJdbcTest.java:18)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307)
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:311)
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:196)
	... 4 more
Caused by: java.net.SocketException: Connection reset
	at java.net.SocketInputStream.read(SocketInputStream.java:210)
	at java.net.SocketInputStream.read(SocketInputStream.java:141)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
	... 10 more

I also tried to connect to Hive from IDEA, but the test connection failed: Driver class 'org.apache.hive.service.rpc.thrift.TCLIService$Iface' not found.
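
Connection reset (rather than refused) means something accepted the TCP connection and then dropped it, so it is worth checking whether HiveServer2 itself is healthy before blaming the Windows port mapping. A small sketch, assuming the hive-server container name from this repo's compose file:

# confirm 10000 is published and that beeline works from inside the container
docker ps --filter name=hive-server --format "{{.Names}} {{.Ports}}"
docker exec -it hive-server /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000 -e "show databases;"

For the IDEA error, the TCLIService classes are bundled in the hive-jdbc standalone jar, so the driver definition there likely needs hive-jdbc-*-standalone.jar rather than the slim hive-jdbc jar.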

Upload an external file to HDFS

Hi...

I just started using docker-hive and it's great.
I would like to know which of the containers is the HDFS one.
I want to upload a file to it and use it for my external table (see the sketch after the questions).

  1. Which container is the HDFS one?
  2. How can I upload the file to that container, and to which path?
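
In this setup the HDFS namespace is fronted by the namenode container (the datanode containers hold the blocks), so the usual route is to copy the file into the namenode container and put it onto HDFS from there. A sketch with placeholder file and directory names:

# copy the local file into the namenode container, then into HDFS
docker cp data.csv namenode:/tmp/data.csv
docker exec -it namenode hdfs dfs -mkdir -p /user/hive/external/mytable
docker exec -it namenode hdfs dfs -put /tmp/data.csv /user/hive/external/mytable/
# the external table would then use LOCATION 'hdfs://namenode:8020/user/hive/external/mytable'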

Not listening on 10000/10002 when I upgrade to 3.1.2

With bde2020/hadoop 3.2.1; postgres is bde2020/hive-metastore-postgresql:3.1.0.

dockerfile:

FROM bde2020/hadoop-base:2.0.0-hadoop3.2.1-java8

ARG HIVE_VERSION=3.1.2

ENV HIVE_VERSION=${HIVE_VERSION:-3.1.2}

ENV HIVE_HOME /opt/hive
ENV PATH $HIVE_HOME/bin:$PATH
ENV HADOOP_HOME /opt/hadoop-$HADOOP_VERSION

WORKDIR /opt

COPY apache-hive-${HIVE_VERSION}-bin.tar.gz /opt
COPY sources.list /etc/apt/

RUN apt-get update && apt-get install -y  procps && \
	tar -xzf apache-hive-$HIVE_VERSION-bin.tar.gz && \
	mv apache-hive-$HIVE_VERSION-bin hive && \
	rm apache-hive-$HIVE_VERSION-bin.tar.gz && \
	rm  -f ./hive/lib/guava-19.0.jar && \
	cp ./hadoop-${HADOOP_VERSION}/share/hadoop/hdfs/lib/guava-27.0-jre.jar ./hive/lib/ && \
	apt-get clean && \
	rm -rf /var/lib/apt/lists/*

COPY postgresql-42.2.14.jar $HIVE_HOME/lib/postgresql-jdbc.jar

#Custom configuration goes here
ADD conf/hive-site.xml $HIVE_HOME/conf
ADD conf/beeline-log4j2.properties $HIVE_HOME/conf
ADD conf/hive-env.sh $HIVE_HOME/conf
ADD conf/hive-exec-log4j2.properties $HIVE_HOME/conf
ADD conf/hive-log4j2.properties $HIVE_HOME/conf
ADD conf/ivysettings.xml $HIVE_HOME/conf
ADD conf/llap-daemon-log4j2.properties $HIVE_HOME/conf

COPY startup.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/startup.sh

COPY entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh

EXPOSE 10000
EXPOSE 10002

ENTRYPOINT ["entrypoint.sh"]
CMD startup.sh

The hive-server logs are:

Configuring hive
 - Setting datanucleus.autoCreateSchema=false
 - Setting javax.jdo.option.ConnectionPassword=hive
 - Setting hive.metastore.uris=thrift://metastore:9083
 - Setting javax.jdo.option.ConnectionURL=jdbc:postgresql://metastore/metastore
 - Setting javax.jdo.option.ConnectionUserName=hive
 - Setting javax.jdo.option.ConnectionDriverName=org.postgresql.Driver
Configuring for multihomed network
[1/100] check for metastore:9083...
[1/100] metastore:9083 is not available yet
[1/100] try in 5s once again ...
[2/100] check for metastore:9083...
[2/100] metastore:9083 is not available yet
[2/100] try in 5s once again ...
[3/100] check for metastore:9083...
[3/100] metastore:9083 is not available yet
[3/100] try in 5s once again ...
[4/100] check for metastore:9083...
[4/100] metastore:9083 is not available yet
[4/100] try in 5s once again ...
[5/100] metastore:9083 is available.
mkdir: `/tmp': File exists
2020-07-22 07:15:33: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.2.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = c86e1a7f-e18c-4c04-a8ba-1179c235fdce
Hive Session ID = 6f036ed3-5041-4479-ad99-eebc193a110a
...
...
...
Hive Session ID = xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxxxx
# quits after printing about 20 lines

The netstat command shows nothing listening on 10000 or 10002.
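
The repeating "Hive Session ID" lines followed by an exit usually mean HiveServer2 is crashing and being retried rather than never starting, and with Hive 3.x the actual exception typically only shows up in hive.log, not on stdout. A hedged place to look (the path assumes the default hive.log.dir under java.io.tmpdir for the root user):

docker exec -it <hive-server-container> bash -c "grep -iE 'error|exception' /tmp/root/hive.log | tail -n 20"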

Compose run error

Have you ever encountered such an error during docker-compose up for this container?

mkosinski@mkosinski-E540:~/docker-hive$  docker network create hadoop
7a8627aba8bc0e37d2e438e026db3107fc7620e81d72ef40e81b73fcbc9881e8
mkosinski@mkosinski-E540:~/docker-hive$  docker-compose up
Pulling namenode (bde2020/hadoop-namenode:1.0.0)...
1.0.0: Pulling from bde2020/hadoop-namenode
6474ebfb7a3e: Already exists
a3ed95caeb02: Already exists
cc291fac9843: Already exists
a6d9a463c720: Already exists
10c553e17298: Already exists
23026f469b0b: Already exists
e73a3f9476ba: Already exists
8d344294cd55: Already exists
40028b4987ae: Already exists
cc226e6c8064: Already exists
bacb0e395a28: Already exists
10d56ed80b63: Already exists
43bc867263cb: Already exists
94259324302c: Already exists
2fa4c7548814: Already exists
Digest: sha256:c07c915f7e90f8ae312083afed35e79f74c40126f15cc516d97b4d86be62a917
Status: Downloaded newer image for bde2020/hadoop-namenode:1.0.0
Pulling datanode1 (bde2020/hadoop-datanode:1.0.0)...
1.0.0: Pulling from bde2020/hadoop-datanode
6474ebfb7a3e: Already exists
a3ed95caeb02: Already exists
6bdc69916002: Already exists
260a74adaeb3: Already exists
e845ba0c0170: Already exists
8767d43e9712: Already exists
8c55e2d06954: Already exists
342f6ce559e7: Already exists
edcc976eae6d: Already exists
14921796925c: Already exists
d16284cecb54: Already exists
a577c290d944: Already exists
4cb3543af39b: Already exists
c2c5bc740b61: Already exists
1a800ce111be: Already exists
Digest: sha256:e35ae0bf7a8de059d1082d62fefa3e5f8e03984a9d83134fa52d1a4bf4015ced
Status: Downloaded newer image for bde2020/hadoop-datanode:1.0.0
Pulling hive (bde2020/hive:latest)...
latest: Pulling from bde2020/hive
6474ebfb7a3e: Already exists
a3ed95caeb02: Already exists
2dd6e1627e0d: Already exists
fd7a2d48248b: Already exists
d4ced813e691: Already exists
ceba6e3f04f7: Already exists
4483d2217c20: Already exists
34b26e66e536: Already exists
ee65fe633f9d: Already exists
d511ed7a5353: Already exists
a0a4a483f533: Already exists
f14e7f36bf10: Already exists
a88a8aecfb20: Pull complete
36aed70eec60: Pull complete
6be34e49cb4f: Pull complete
d9fde58d5ed4: Pull complete
40ba71786be4: Pull complete
5b3ac5ace6fc: Pull complete
f0653bb3244f: Pull complete
275ccce42392: Pull complete
7e259572ae18: Pull complete
ba7d9d3179ad: Pull complete
Digest: sha256:a967a78b2d88aaee1de82a55552f24512d99f60e094a52fef6448fe7f856d404
Status: Downloaded newer image for bde2020/hive:latest
Starting 58900ede7571_namenode
Starting datanode1
Recreating a08ec6781658_a08ec6781658_hive
Starting datanode3
Starting datanode2

ERROR: for hive  No such image: sha256:f9dac89ff5c716eef59a61168cf6a0ca2f726cb3f625e1e41175c4c317f81160
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1

Cannot connect to the Hive metastore using DBeaver

I am trying to access the Hive metastore because I need table metadata that is not available from DESCRIBE FORMATTED, but whenever I try to connect to it using DBeaver I get an error saying that the connection failed. I am trying to authenticate using the following JDBC URL: jdbc:postgresql://hive-metastore-postgresql:5432/metastore, with the default username and password contained in hive-site.xml, which are hive and hive.
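
A hostname like hive-metastore-postgresql only resolves inside the compose network, so DBeaver running on the host has to go through a published port (typically localhost plus whatever 5432 is mapped to) instead. A couple of sanity checks (user and database names assumed from hive-site.xml):

# is 5432 published to the host at all?
docker-compose ps hive-metastore-postgresql
# confirm the metastore schema is reachable from inside the container
docker-compose exec hive-metastore-postgresql psql -U hive -d metastore -c '\dt'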

Presto cannot find the hive tables

Used Presto CLI:
./presto.jar --server localhost:8080 --catalog hive --schema [my_schema]

[my_schema] is in the result of show schemas;, but Presto cannot find the Hive tables (verified in Hive) in [my_schema]; show tables; returns 0 rows.

Testing on docker swarm

Provide a docker-compose v2 definition compatible with Docker Swarm, and instructions on how to run it inside a swarm.

No such file or directory on hive_server start

Trying to start hive_server in docker-machine

panic: standard_init_linux.go:175: exec user process caused "no such file or directory" [recovered]
panic: standard_init_linux.go:175: exec user process caused "no such file or directory"
2016-11-28T13:17:30.814269724Z 
goroutine 1 [running, locked to thread]:
panic(0x88f8a0, 0xc82012e690)
/usr/local/go/src/runtime/panic.go:481 +0x3e6
github.com/urfave/cli.HandleAction.func1(0xc8200f12e8)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/app.go:478 +0x38e
panic(0x88f8a0, 0xc82012e690)
/usr/local/go/src/runtime/panic.go:443 +0x4e9
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization.func1(0xc8200f0bf8, 0xc82001a0c8, 0xc8200f0d08)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:259 +0x136
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization(0xc820051630, 0x7fadd3c9c728, 0xc82012e690)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:277 +0x5b1
main.glob.func8(0xc82006ea00, 0x0, 0x0)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/main_unix.go:26 +0x68
reflect.Value.call(0x7f45a0, 0x9a4d88, 0x13, 0x8ebac8, 0x4, 0xc8200f1268, 0x1, 0x1, 0x0, 0x0, ...)
/usr/local/go/src/reflect/value.go:435 +0x120d
reflect.Value.Call(0x7f45a0, 0x9a4d88, 0x13, 0xc8200f1268, 0x1, 0x1, 0x0, 0x0, 0x0)
/usr/local/go/src/reflect/value.go:303 +0xb1
github.com/urfave/cli.HandleAction(0x7f45a0, 0x9a4d88, 0xc82006ea00, 0x0, 0x0)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/app.go:487 +0x2ee
github.com/urfave/cli.Command.Run(0x8ee970, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x984240, 0x51, 0x0, ...)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/command.go:191 +0xfec
github.com/urfave/cli.(*App).Run(0xc820001500, 0xc82000a100, 0x2, 0x2, 0x0, 0x0)
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/Godeps/_workspace/src/github.com/urfave/cli/app.go:240 +0xaa4
main.main()
/tmp/tmp.n151sEscRu/src/github.com/opencontainers/runc/main.go:137 +0xe24

Presto connector version

Is it possible to update the Presto server to a more recent version? 0.181 is fairly old. For example, I need to test the following syntax and it doesn't work in 0.181:

SELECT * FROM sandbox."table_name$partitions"

Getting qemu: uncaught target signal 11 (Segmentation fault) - core dumped Segmentation fault

I tried to run docker-hive using the following commands, but I'm not able to start the Hive server and I get a segmentation fault error.

$ docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`

Recreating docker-hive_datanode_1                  ... done
Recreating docker-hive_hive-server_1               ... done
Recreating docker-hive_namenode_1                  ... done
Recreating docker-hive_hive-metastore-postgresql_1 ... done
Recreating docker-hive_presto-coordinator_1        ... done
Recreating docker-hive_hive-metastore_1            ... done
$ docker-compose exec hive-server bash
root@2c02ebc72a4d:/opt# /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000
qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault

I see many people have faced a similar issue. In this comment, it is suggested to supply an arm64 or multi-arch image. Has this been done, or is it possible in the future? I'm facing this issue on a Mac M1.

Spark not able to connect to hive metastore

Hi,

I have Spark running in another container and I have copied hive-site.xml to the spark/conf folder. However, I am not able to connect to the hive-metastore using thrift://hive-metastore:9083.

The Docker containers are:
CONTAINER ID   IMAGE                                             COMMAND                  CREATED       STATUS                 PORTS                                          NAMES
c3cdade997d1   shawnzhu/prestodb:0.181                           "./bin/launcher run"     4 hours ago   Up 4 hours             8080/tcp, 0.0.0.0:8089->8089/tcp               docker-hive-master2_presto-coordinator_1
31d31e19936b   bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8   "/entrypoint.sh /run…"   4 hours ago   Up 4 hours (healthy)   0.0.0.0:50075->50075/tcp                       docker-hive-master2_datanode_1
67a3f25604df   bde2020/hive:2.3.2-postgresql-metastore           "entrypoint.sh /opt/…"   4 hours ago   Up 4 hours             10000/tcp, 0.0.0.0:9083->9083/tcp, 10002/tcp   docker-hive-master2_hive-metastore_1
23f8ff772a49   bde2020/hive-metastore-postgresql:2.3.0           "/docker-entrypoint.…"   4 hours ago   Up 4 hours             5432/tcp                                       docker-hive-master2_hive-metastore-postgresql_1
495d5aa65d46   bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8   "/entrypoint.sh /run…"   4 hours ago   Up 4 hours (healthy)   0.0.0.0:50070->50070/tcp                       docker-hive-master2_namenode_1
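
Copying hive-site.xml is only half of it: thrift://hive-metastore:9083 has to resolve from the Spark container, which it only does if that container is attached to the same compose network (or you go through the host-published 0.0.0.0:9083 mapping shown above). A sketch, with the network name assumed to follow compose's <project>_default convention and placeholder class/jar names:

docker network ls | grep docker-hive
docker network connect docker-hive-master2_default <spark-container>
# or override the URI explicitly at submit time
spark-submit --conf spark.hadoop.hive.metastore.uris=thrift://hive-metastore:9083 --class com.example.App app.jar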
