
password of default user is not propagated to shards #265

Closed
ralfbecher opened this issue Feb 22, 2020 · 6 comments
Labels
feature, ongoing discussion (Issue is under discussion, no decision made so far)

Comments

@ralfbecher

Hi,

when creating a cluster where the password of the default user is set in the manifest, it is not propagated to remote_servers.xml. This leads to errors when using umbrella (Distributed) tables:

From this example: https://github.com/Altinity/clickhouse-operator/blob/master/docs/replication_setup.md

SELECT * FROM events;

Code: 194, e.displayText() = DB::Exception: Received from chi-test-cho-with-shards-simple-1-0:9000. DB::Exception: Password required for user default. (version 20.1.4.14 (official build))
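
For context, events in that walkthrough is a Distributed ("umbrella") table on top of a per-shard local table. A rough sketch, using the cluster name simple from the manifest below (the linked doc uses its own names), looks like:

    CREATE TABLE events AS events_local
    ENGINE = Distributed(simple, default, events_local, rand());

A query against this table makes the receiving node connect to every shard with the credentials listed in remote_servers.xml, so when the default user has a password but remote_servers does not carry it, the shards reject the connection with the error above.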

This is my manifest file:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "test-cho-with-shards"
spec:
  defaults:
    templates:
      dataVolumeClaimTemplate: data-volume-template
  configuration:
    users:
      default/password: secret
    zookeeper:
      nodes:
        - host: zookeeper.zoons
    clusters:
      - name: "simple"
        layout:
          shardsCount: 3
  templates:
    volumeClaimTemplates:
      - name: data-volume-template
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi

I think I could insert the user/password into remote_servers.xml manually, but an applied change in the manifest would remove it again.

@alex-zaitsev
Member

Good catch, @ralfbecher.

The common practice is to keep the default user with no password BUT restricted to cluster nodes only, used solely for inter-cluster communication. This is how the operator deploys its default configuration.
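
For reference, that default deployment looks roughly like the following in the users configuration (a sketch, not the operator's literal output; the host_regexp pattern is illustrative):

    <users>
        <default>
            <!-- empty password, but access restricted to cluster pods and localhost -->
            <password></password>
            <networks>
                <host_regexp>^chi-test-cho-with-shards-.*$</host_regexp>
                <ip>127.0.0.1</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
    </users>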

I agree that the operator should automatically propagate the password to remote_servers, but that creates an extra security vulnerability, since a password in remote_servers cannot be masked/hashed, unlike in the user definition.

Probably we should forbid changing the default user's password and network settings altogether.

@ralfbecher
Author

Then you could add support for hashed passwords in remote_servers...

@alex-zaitsev
Member

@ralfbecher, unfortunately ClickHouse needs a real password in order to connect to other servers, so a hashed password cannot work. Either we tolerate plain passwords in remote_servers, or we use no passwords here at all and rely on network security.
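
For illustration, tolerating plain passwords in remote_servers would mean something like this per replica (a sketch; host and password are placeholders):

    <remote_servers>
        <simple>
            <shard>
                <replica>
                    <host>chi-test-cho-with-shards-simple-0-0</host>
                    <port>9000</port>
                    <!-- plain-text credentials, readable by anyone with access to the config -->
                    <user>default</user>
                    <password>secret</password>
                </replica>
            </shard>
        </simple>
    </remote_servers>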

@gyrter

gyrter commented Nov 25, 2020

Hello, everyone. There is a small workaround for this problem: you can use a networks access list to avoid the password.
For example:

    users:
      default/networks/host_regexp:
      - ^chi-cluster-cluster-\d-\d.*$
      - ^chi-cluster-cluster-\d-\d-\d\..*$
      default/networks/ip:
      - "::1"
      - "127.0.0.1"
      default/profile: default
      default/quota: default
      default/connect_timeout_with_failover_ms: 1000

@yuzhichang
Contributor

ClickHouse/ClickHouse#13156 added secure inter-cluster query execution in v20.10.3.30+.
For each cluster in metrika.xml, add a non-empty secret tag. This tells ClickHouse to use the current query user for remote queries:

    <remote_servers>
        <abc>
            <secret>foo</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.101.106</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.101.108</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.101.110</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.102.114</host>
                    <port>9000</port>
                </replica>
            </shard>
        </abc>
    </remote_servers>
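
On the operator side, one way to get such a secret into the generated remote_servers is the manifest's configuration.files section, relying on ClickHouse merging the fragment into the existing cluster definition (a sketch under that assumption; newer operator releases may also offer a dedicated per-cluster secret setting, so check the docs for your version):

    spec:
      configuration:
        files:
          config.d/remote_servers_secret.xml: |
            <yandex>
              <remote_servers>
                <simple>
                  <secret>foo</secret>
                </simple>
              </remote_servers>
            </yandex>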
