Quickstart with Docker-Compose

Requirement

For optimal performance, we recommend a Linux kernel version of 4.14 or higher when initializing an HStreamDB cluster.

TIP

If you cannot use a Linux kernel version of 4.14 or above, we recommend adding the option --enable-dscp-reflection=false to HStore when starting the HStreamDB cluster.
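
For reference, this is how the option appears in the hstore startup command of the docker-compose file further below (a minimal excerpt of that command, nothing extra to configure):

sh
/usr/local/bin/ld-dev-cluster --root /data/store \
  --param enable-dscp-reflection=false \
  --no-interactive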

Installation

Install Docker

TIP

If you have already installed docker, you can skip this step.

See Install Docker Engine, and install it for your operating system. Please carefully check that you have met all prerequisites.

Confirm that the Docker daemon is running:

sh
docker version

TIP

On Linux, Docker needs root privileges. You can also run Docker as a non-root user, see Post-installation steps for Linux.
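
For example, the post-installation guide describes adding your user to the docker group. The commands below are a sketch of those steps; check the linked page for the details for your distribution:

sh
# Create the docker group if it does not exist yet, then add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER
# Re-evaluate group membership without logging out and back in
newgrp docker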

Install Docker Compose

TIP

If you have already installed docker compose, you can skip this step.

See Install Docker Compose, and install it for your operating system. Please carefully check that you have met all prerequisites.

Confirm that Docker Compose is installed:

sh
docker-compose version

Start HStreamDB Services

WARNING

Do NOT use this configuration in your production environment!

Create a quick-start.yaml file for Docker Compose; you can download it or paste the following contents:

yaml
version: "3.5"

services:
  hserver:
    image: hstreamdb/hstream:v0.19.0
    depends_on:
      - zookeeper
      - hstore
    ports:
      - "127.0.0.1:6570:6570"
    expose:
      - 6570
    networks:
      - hstream-quickstart
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp:/tmp
      - data_store:/data/store
    command:
      - bash
      - "-c"
      - |
        set -e
        /usr/local/script/wait-for-storage.sh hstore 6440 zookeeper 2181 600 \
        /usr/local/bin/hstream-server \
        --bind-address 0.0.0.0 --port 6570 \
        --internal-port 6571 \
        --server-id 100 \
        --seed-nodes "$$(hostname -I | awk '{print $$1}'):6571" \
        --advertised-address $$(hostname -I | awk '{print $$1}') \
        --metastore-uri zk://zookeeper:2181 \
        --store-config /data/store/logdevice.conf \
        --store-admin-host hstore --store-admin-port 6440 \
        --store-log-level warning \
        --io-tasks-path /tmp/io/tasks \
        --io-tasks-network hstream-quickstart

  hstore:
    image: hstreamdb/hstream:v0.19.0
    networks:
      - hstream-quickstart
    volumes:
      - data_store:/data/store
    command:
      - bash
      - "-c"
      - |
        set -ex
        # N.B. "enable-dscp-reflection=false" is required for linux kernel which
        # doesn't support dscp reflection, e.g. centos7.
        /usr/local/bin/ld-dev-cluster --root /data/store \
        --use-tcp --tcp-host $$(hostname -I | awk '{print $$1}') \
        --user-admin-port 6440 \
        --param enable-dscp-reflection=false \
        --no-interactive

  zookeeper:
    image: zookeeper:3.7
    expose:
      - 2181
    networks:
      - hstream-quickstart
    volumes:
      - data_zk_data:/data
      - data_zk_datalog:/datalog

  hserver-init:
    image: hstreamdb/hstream:v0.19.0
    depends_on:
      - hserver
    networks:
      - hstream-quickstart
    command:
      - bash
      - "-c"
      - |
        timeout=60
        until ( \
            /usr/local/bin/hadmin server --host hserver --port 6570 status \
        ) >/dev/null 2>&1; do
            >&2 echo 'Waiting for servers ...'
            sleep 1
            timeout=$$((timeout - 1))
            [ $$timeout -le 0 ] && echo 'Timeout!' && exit 1;
        done; \
        /usr/local/bin/hadmin server --host hserver --port 6570 init

  hstream-exporter:
    depends_on:
      hserver-init:
        condition: service_completed_successfully
    image: hstreamdb/hstream-exporter:latest
    networks:
      - hstream-quickstart
    command:
      - bash
      - "-c"
      - |
        set -ex
        hstream-exporter --addr hstream://hserver:6570

  gen_prometheus_config:
    image: prom/prometheus
    entrypoint:
      - sh
      - "-c"
      - |
        echo '
        global:
          scrape_interval: 15s
          evaluation_interval: 15s

          external_labels:
            monitor: "hstream-monitor"

        scrape_configs:
          - job_name: "hstream_metrics"
            scrape_interval: 5s
            static_configs:
              - targets:
                  - hstream-exporter:9200
        ' > /prometheus/prometheus.yml
    volumes:
      - data_prom_config:/prometheus

  prometheus:
    image: prom/prometheus
    depends_on:
      gen_prometheus_config:
        condition: service_completed_successfully
    expose:
      - 9090
    networks:
      - hstream-quickstart
    ports:
      - "9090:9090"
    volumes:
      - data_prom_config:/etc/prometheus

  console:
    image: hstreamdb/hstream-console:latest
    depends_on:
      hserver-init:
        condition: service_completed_successfully
    expose:
      - 5177
    networks:
      - hstream-quickstart
    environment:
      - SERVER_PORT=5177
      - PROMETHEUS_URL=http://prometheus:9090
      - HSTREAM_PRIVATE_ADDRESS=hserver:6570
    ports:
      - "127.0.0.1:5177:5177"

networks:
  hstream-quickstart:
    name: hstream-quickstart

volumes:
  data_store:
    name: quickstart_data_store
  data_zk_data:
    name: quickstart_data_zk_data
  data_zk_datalog:
    name: quickstart_data_zk_datalog
  data_prom_config:
    name: quickstart_data_prom_config

Then run:

sh
docker-compose -f quick-start.yaml up

If you see something like this, then you have a running HStream:

txt
hserver_1    | [INFO][2021-11-22T09:15:18+0000][app/server.hs:137:3][thread#67]************************
hserver_1    | [INFO][2021-11-22T09:15:18+0000][app/server.hs:145:3][thread#67]Server started on port 6570
hserver_1    | [INFO][2021-11-22T09:15:18+0000][app/server.hs:146:3][thread#67]*************************

TIP

You can also run it in the background:

sh
docker-compose -f quick-start.yaml up -d

TIP

If you want to show the server logs, run:

sh
docker-compose -f quick-start.yaml logs -f hserver
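
When you are done experimenting, you can stop the services with the standard docker-compose down command; adding -v also removes the named volumes declared above, which discards all quickstart data:

sh
docker-compose -f quick-start.yaml down
# or, to also remove the data volumes created by this quickstart:
docker-compose -f quick-start.yaml down -v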

Connect to HStreamDB with the HStream CLI

HStreamDB can be directly managed using the hstream command-line interface (CLI), which is included in the hstreamdb/hstream image.

Start an instance of hstreamdb/hstream using Docker:

sh
docker run -it --rm --name some-hstream-cli --network host hstreamdb/hstream:v0.19.0 bash

Create stream

To create a stream, you can use the hstream stream create command. Now we will create a stream with 2 shards:

sh
hstream stream create demo --shards 2
sh
+-------------+---------+----------------+-------------+
| Stream Name | Replica | Retention Time | Shard Count |
+-------------+---------+----------------+-------------+
| demo        | 1       | 604800 seconds | 2           |
+-------------+---------+----------------+-------------+

Write data to streams

The hstream stream append command can be used to write data to a stream in an interactive shell.

sh
hstream stream append demo --separator "@"
  • With the --separator option, you can specify the key separator. The default separator is "@". Using the separator, you can assign a key to each record. Records with the same key will be appended to the same shard of the stream.
sh
key1@{"temperature": 22, "humidity": 80}
key1@{"temperature": 32, "humidity": 21, "tag": "test1"}
hello world!

Here we have written three pieces of data. The first two are in JSON format and are associated with key1. The last one does not specify a key.

For additional information, you can use hstream stream append -h.
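
If the append command reads records line by line from standard input (as the interactive session above suggests), you may also be able to script writes with a pipe. This is an assumption rather than documented behaviour; verify it with hstream stream append -h first:

sh
# Assumption: records are read from stdin, one per line, using the same key@payload form
echo 'key1@{"temperature": 25, "humidity": 65}' | hstream stream append demo --separator "@"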

Read data from a stream

To read data from a particular stream, the hstream stream read-stream command is used.

sh
hstream stream read-stream demo
sh
timestamp: "1692774821444", id: 1928822601796943-8589934593-0, key: "key1", record: {"humidity":80.0,"temperature":22.0}
timestamp: "1692774844649", id: 1928822601796943-8589934594-0, key: "key1", record: {"humidity":21.0,"tag":"test1","temperature":32.0}
timestamp: "1692774851017", id: 1928822601796943-8589934595-0, key: "", record: hello world!

You can also set a read offset, which can be one of the following types:

  • earliest: This seeks to the first record of the stream.
  • latest: This seeks to the end of the stream.
  • timestamp: This seeks to a record with a specific creation timestamp.

For instance:

sh
hstream stream read-stream demo --from 1692774844649 --total 1
sh
timestamp: "1692774844649", id: 1928822601796943-8589934594-0, key: "key1", record: {"humidity":21.0,"tag":"test1","temperature":32.0}

Start HStreamDB's interactive SQL CLI

sh
docker run -it --rm --name some-hstream-cli --network host hstreamdb/hstream:v0.19.0 hstream --port 6570 sql

If everything works fine, you will enter an interactive CLI and see help information like

txt
      __  _________________  _________    __  ___
     / / / / ___/_  __/ __ \/ ____/   |  /  |/  /
    / /_/ /\__ \ / / / /_/ / __/ / /| | / /|_/ /
   / __  /___/ // / / _, _/ /___/ ___ |/ /  / /
  /_/ /_//____//_/ /_/ |_/_____/_/  |_/_/  /_/

Command
  :h                           To show these help info
  :q                           To exit command line interface
  :help [sql_operation]        To show full usage of sql statement

SQL STATEMENTS:
  To create a simplest stream:
    CREATE STREAM stream_name;

  To create a query select all fields from a stream:
    SELECT * FROM stream_name EMIT CHANGES;

  To insert values to a stream:
    INSERT INTO stream_name (field1, field2) VALUES (1, 2);

>

Run a continuous query over the stream

Now we can run a continuous query over the stream we just created with a SELECT statement.

The query will output all records from the demo stream whose humidity is above 70 percent.

sql
SELECT * FROM demo WHERE humidity > 70 EMIT CHANGES;

It may seem that nothing happened, but do not worry: there is no data in the stream yet. Next, we will fill the stream with some data so the query can produce the output we want.

Start another CLI session

Start another CLI session; it will be used for inserting data into the stream.

sh
docker exec -it some-hstream-cli hstream --port 6570 sql

Insert data into the stream

Run each of the given INSERT statements in the new CLI session and keep an eye on the CLI session running the continuous query.

sql
INSERT INTO demo (temperature, humidity) VALUES (22, 80);
INSERT INTO demo (temperature, humidity) VALUES (15, 20);
INSERT INTO demo (temperature, humidity) VALUES (31, 76);
INSERT INTO demo (temperature, humidity) VALUES ( 5, 45);
INSERT INTO demo (temperature, humidity) VALUES (27, 82);
INSERT INTO demo (temperature, humidity) VALUES (28, 86);

If everything works fine, the continuous query will output matching records in real time:

json
{"humidity":{"$numberLong":"80"},"temperature":{"$numberLong":"22"}}
{"humidity":{"$numberLong":"76"},"temperature":{"$numberLong":"31"}}
{"humidity":{"$numberLong":"82"},"temperature":{"$numberLong":"27"}}
{"humidity":{"$numberLong":"86"},"temperature":{"$numberLong":"28"}}

Start Exploring HStreamDB Using the Console

The HStreamDB Console is the management panel for HStreamDB. You can use it to manage most resources of HStreamDB, perform data reading and writing, execute SQL queries, and more.

You can open the Console panel by entering http://localhost:5177 into your browser. For more details about the Console, please check Get Started on HStream Console.

Now, you can start exploring HStreamDB with joy.