
[Bug]: mongo requires alternate shell in command to work as shard config server #717

Open
theDanielJLewis opened this issue Nov 26, 2024 · 9 comments



theDanielJLewis commented Nov 26, 2024

Error Message and Logs

```
BadValue: Cannot start a configsvr as a standalone server. Please use the option --replSet to start the node as a replica set.
```

Steps to Reproduce

1. Pull a MongoDB image (version 6, 7, or 8; it doesn't matter).
2. Make the following mongod.conf:

```yaml
sharding:
  clusterRole: configsvr
replication:
  replSetName: "cfg0"
```

3. Create a docker-compose.yml file for starting mongo similar to this (simplified for illustration):

```yaml
services:
    mongocfg1:
        image: 'mongo:8'
        command: 'mongod --config /etc/mongo/mongod.conf'
        container_name: mongocfg1
        # ...
        volumes:
            - 'mongodb-configdb:/data/configdb'
            - 'mongodb-db:/data/db'
            - type: bind
              source: /data/mongod.conf
              target: /etc/mongo/mongod.conf
              read_only: true
volumes:
    mongodb-configdb:
        name: mongodb-configdb
        external: false
    mongodb-db:
        name: mongodb-db
        external: false
```

4. Start mongo with `docker compose up`.
5. MongoDB will not start and will complain that the replica set option is missing, even though it's clearly there in the mongod.conf.
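As a sanity check, the config file from step 2 can be validated outside Docker before suspecting the image (a quick sketch; the /tmp path is just for illustration):

```shell
# Recreate the mongod.conf from step 2 and confirm that both the
# configsvr role and the replica-set name are really present.
cat > /tmp/mongod.conf <<'EOF'
sharding:
  clusterRole: configsvr
replication:
  replSetName: "cfg0"
EOF

grep -q 'clusterRole: configsvr' /tmp/mongod.conf \
  && grep -q 'replSetName: "cfg0"' /tmp/mongod.conf \
  && echo "config ok"   # prints "config ok"
```

A well-formed file like this makes the "missing --replSet" complaint all the more surprising.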

Additional Information

After a lot of digging and experimenting, I found the strangest cause for this: the command line of docker-compose.yml. Whether I write `command: 'mongod --config /etc/mongo/mongod.conf'` or `command: --config /etc/mongo/mongod.conf`, it will not read the replica set options when the configsvr option is included. And it doesn't seem to matter what I change in mongod.conf.

However, if we replace the command line with:

```yaml
command: "/bin/sh -c 'exec mongod --config /etc/mongo/mongod.conf'"
```

Or:

```yaml
entrypoint: "/bin/sh"
command: "-c 'exec mongod --config /etc/mongo/mongod.conf'"
```

And then run `docker compose up`, mongod appears to start fine.

Aside: even though the error message complains about missing replica set configuration, adding `--replSet NAME` to the command line still didn't fix it. It seems the only way to use Docker to start a shard config server is the shell trick shown above.
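For completeness, here is the shell trick applied in the context of the service (a sketch; everything other than the entrypoint/command pair is carried over from the earlier example):

```yaml
services:
    mongocfg1:
        image: 'mongo:8'
        # Launch mongod through a shell instead of the image's
        # docker-entrypoint.sh; with this, the configsvr + replSetName
        # combination in mongod.conf is honored.
        entrypoint: "/bin/sh"
        command: "-c 'exec mongod --config /etc/mongo/mongod.conf'"
        # volumes, etc. as in the example above
```

Note that replacing the entrypoint also skips the image's init handling (the `MONGO_INITDB_*` variables and /docker-entrypoint-initdb.d), so initial users would have to be created manually.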

Although I've found this workaround, it's not how the Docker image documentation says to do it, and it's not possible with systems that take control of docker-compose.yml (such as Coolify), which force it back to `command: 'mongod --config /etc/mongo/mongod.conf'`, which should work anyway!

For reference, here's the MongoDB documentation on creating a shard config server. Yes, it points out binding an IP, but I've noticed that makes no difference with this problem.

LaurentGoderre (Member) commented Nov 26, 2024

Is your mongod.conf really on your host at /data/mongod.conf?

@theDanielJLewis (Author)

Yes, for the sake of this discussion and my simplified example. And I know the Docker container is loading it, because that's where it's getting the configsvr option from. If I remove those sharding-specific lines from the .conf file, mongo starts fine, so there's no problem accessing the file or its contents. The problem is that the Docker image somehow doesn't handle that file properly (or completely?) when loaded as I showed, yet it handles the exact same file correctly when I change the command line as shown above.

@LaurentGoderre (Member)

I am unable to reproduce this locally.

@LaurentGoderre (Member)

Is it possible you have some dangling volumes? Can you do a `docker-compose down -v` to clear any existing volumes that may be holding on to old values?

@yosifkit (Member)

I had a partial fix for this in #600, so this is part of #509

theDanielJLewis (Author) commented Nov 26, 2024

Big progress in tracking this down! It seems to be a problem only when setting `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` as environment variables.

Here's a complete docker-compose.yml (replace the absolute project path as appropriate for your system):

```yaml
services:
  mongocfg1:
    image: "mongo:8"
    command: "mongod --config /etc/mongo/mongod.conf"
    container_name: mongocfg1
#    environment:
#      - MONGO_INITDB_ROOT_USERNAME=root
#      - MONGO_INITDB_ROOT_PASSWORD=pass123
#      - MONGO_INITDB_DATABASE=default
    restart: unless-stopped
    healthcheck:
      test:
        - CMD
        - echo
        - ok
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 5s
    mem_limit: "0"
    memswap_limit: "0"
    mem_swappiness: 60
    mem_reservation: "0"
    cpus: 0.0
    cpu_shares: 1024
    volumes:
      - "mongodb-configdb:/users/me/Dropbox/dev-projects/mongodb/configdb"
      - "mongodb-db:/users/me/Dropbox/dev-projects/mongodb/db"
      - "/users/me/Dropbox/dev-projects/mongodb/mongo-mount:/tmp/root"
      - type: bind
        source: /users/me/Dropbox/dev-projects/mongodb/mongod.conf
        target: /etc/mongo/mongod.conf
        read_only: true
      # - type: bind
      #   source: /users/me/Dropbox/dev-projects/mongodb/docker-entrypoint-initdb.d
      #   target: /docker-entrypoint-initdb.d
      #   read_only: true
volumes:
  mongodb-configdb:
    name: mongodb-configdb
    external: false
  mongodb-db:
    name: mongodb-db
    external: false
```

Starting this as-is will work fine. But tear it down with `docker compose down -v`, uncomment the environment section, start fresh, and it fails.

The same thing happens if I comment out the environment section but uncomment that last volume bind.

In this case, that last volume bind points to a JS file with these contents:

```js
db = db.getSiblingDB("default")
db.createCollection("init_collection")
db.createUser({
  user: "root",
  pwd: "pass123",
  roles: [{ role: "readWrite", db: "default" }],
})
```

So it seems this error is related to these init commands, but I'm not quite sure I understand why, yet.
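A plausible reading (an assumption on my part, not verified against the image's actual docker-entrypoint.sh): the entrypoint only takes its init path when root credentials or init scripts are present, and that init path runs a temporary standalone mongod, which a configsvr config refuses. Roughly:

```shell
# Simplified sketch of the suspected entrypoint decision; the real
# docker-entrypoint.sh is more involved.
INITDB_DIR="${INITDB_DIR:-/docker-entrypoint-initdb.d}"

needs_init() {
    # Init runs when root credentials are set or init scripts exist.
    if [ -n "${MONGO_INITDB_ROOT_USERNAME:-}" ]; then
        echo yes
    elif [ -n "$(ls -A "$INITDB_DIR" 2>/dev/null)" ]; then
        echo yes
    else
        echo no
    fi
}

# Without credentials or scripts: no temporary init server is needed,
# so the configsvr config starts cleanly.
unset MONGO_INITDB_ROOT_USERNAME
echo "no env: $(needs_init)"     # prints "no env: no"

# With credentials: the init path would run a temporary standalone
# mongod, which "Cannot start a configsvr as a standalone server".
MONGO_INITDB_ROOT_USERNAME=root
echo "with env: $(needs_init)"   # prints "with env: yes"
```

That would match both triggers observed above: the `MONGO_INITDB_ROOT_*` variables and a populated /docker-entrypoint-initdb.d.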

@theDanielJLewis (Author)

@yosifkit:

> I had a partial fix for this in #600, so this is part of #509

I had seen that in my searching for a solution before posting, but I didn't see the connection with my issue until now. Will #600 ever be merged at this point, almost two years later?

@yosifkit (Member)

I can definitely rebase it for the newer releases, but I was never really sure that it was a great fix. We didn't really get feedback on it. Looking at this and #509, maybe 7-8 people have run into this problem in the past two years, so it hasn't been prioritized.

@theDanielJLewis (Author)

I could see it becoming a higher priority as systems like Coolify, which fully manage a server using Docker containers, become more popular. That's what I'm doing, and it looks like I won't be able to run my MongoDB sharded setup that way because of this unresolved issue.
