
[reconfigurator] Reject clickhouse configurations from old generations #7347

Open · wants to merge 25 commits into main
Conversation

Contributor

@karencfv karencfv commented Jan 15, 2025

Overview

This commit adds functionality to clickhouse-admin to keep track of the blueprint generation number. It also adds a new validation check: if reconfigurator attempts to generate a configuration file from a previous generation, clickhouse-admin will refuse to generate that configuration file and will exit with an error.

Additionally, there's been a small clean-up of the clickhouse-admin code.

Manual testing

In a local omicron deployment, we first tell reconfigurator to deploy a clickhouse policy with the default number of replicas and keepers.

root@oxz_switch:~# omdb nexus blueprints diff target d6a6c153-76aa-4933-98bd-1009d95f03d2
note: Nexus URL not specified.  Will pick one from DNS.
note: using DNS server for subnet fd00:1122:3344::/48
note: (if this is not right, use --dns-server to specify an alternate DNS server)
note: using Nexus URL http://[fd00:1122:3344:101::c]:12221
from: blueprint fb9d6881-3c8a-44e2-b9f3-b8222ebdae99
to:   blueprint d6a6c153-76aa-4933-98bd-1009d95f03d2

<...>

 CLICKHOUSE CLUSTER CONFIG:
+   generation:::::::::::::::::::::::::::::::::::::   2
+   max used server id:::::::::::::::::::::::::::::   3
+   max used keeper id:::::::::::::::::::::::::::::   5
+   cluster name:::::::::::::::::::::::::::::::::::   oximeter_cluster
+   cluster secret:::::::::::::::::::::::::::::::::   750a492f-1c3d-430c-8d18-c74596fd2ec8
+   highest seen keeper leader committed log index:   0

    clickhouse keepers at generation 2:
    ------------------------------------------------
    zone id                                keeper id
    ------------------------------------------------
+   13c665e9-d7bd-43a5-b780-47acf8326feb   1        
+   325e3ac5-6cc8-4aec-9ac0-ea8d9a60c40f   2        
+   37e41e42-3b0c-49a6-8403-99fe66e84897   3        
+   4e65bf56-c7d6-485d-9b7f-8513a55838f9   4        
+   8a5df7fa-8633-4bf9-a7fa-567d5e62ffbf   5        

    clickhouse servers at generation 2:
    ------------------------------------------------
    zone id                                server id
    ------------------------------------------------
+   45af8162-253a-494c-992e-137d2bd5f350   1        
+   676772e0-d0c4-425b-a0d1-f6df46e4d10c   2        
+   84d249d1-9c13-460a-9c7c-08a979471246   3    

We can see keepers and servers are at generation 2.

Now we zlogin into a keeper zone to check that we have recorded that information and that the node has joined the quorum.

root@oxz_clickhouse_keeper_37e41e42:~# curl http://[fd00:1122:3344:101::23]:8888/generation                   
2
root@oxz_clickhouse_keeper_37e41e42:~# head -n 1 /opt/oxide/clickhouse_keeper/keeper_config.xml 
<!-- generation:2 -->
root@oxz_clickhouse_keeper_37e41e42:~# curl http://[fd00:1122:3344:101::23]:8888/4lw-lgif  
{"first_log_idx":1,"first_log_term":1,"last_log_idx":7123,"last_log_term":1,"last_committed_log_idx":7123,"leader_committed_log_idx":7123,"target_committed_log_idx":7123,"last_snapshot_idx":0}

We zlogin into a replica zone and check that we have recorded that information, and that the database contains the expected oximeter tables and fields.

root@oxz_clickhouse_server_676772e0:~# curl http://[fd00:1122:3344:101::28]:8888/generation 
2
root@oxz_clickhouse_server_676772e0:~# head -n 1 /opt/oxide/clickhouse_server/config.d/replica-server-config.xml 
<!-- generation:2 -->
root@oxz_clickhouse_server_676772e0:~# /opt/oxide/clickhouse_server/clickhouse client --host fd00:1122:3344:101::28
ClickHouse client version 23.8.7.1.
Connecting to fd00:1122:3344:101::28:9000 as user default.
Connected to ClickHouse server version 23.8.7 revision 54465.

oximeter_cluster_2 :) show tables in oximeter

SHOW TABLES FROM oximeter

Query id: 1baa160b-3332-4fa4-a91d-0032fd917a96

┌─name─────────────────────────────┐
│ fields_bool                      │
│ fields_bool_local                │
│ fields_i16                       │
│ fields_i16_local                 │
│ <...>                            │
│ version                          │
└──────────────────────────────────┘

81 rows in set. Elapsed: 0.009 sec. 

Now we want to force a new generation number, so we set a clickhouse policy with an additional server and keeper:

root@oxz_switch:~# omdb nexus blueprints diff target a598ce1b-1413-47d6-bc8c-7b63b6d09158
note: Nexus URL not specified.  Will pick one from DNS.
note: using DNS server for subnet fd00:1122:3344::/48
note: (if this is not right, use --dns-server to specify an alternate DNS server)
note: using Nexus URL http://[fd00:1122:3344:101::c]:12221
from: blueprint d6a6c153-76aa-4933-98bd-1009d95f03d2
to:   blueprint a598ce1b-1413-47d6-bc8c-7b63b6d09158

<...>

 CLICKHOUSE CLUSTER CONFIG:
*   generation:::::::::::::::::::::::::::::::::::::   2 -> 3
*   max used server id:::::::::::::::::::::::::::::   3 -> 4
*   max used keeper id:::::::::::::::::::::::::::::   5 -> 6
    cluster name:::::::::::::::::::::::::::::::::::   oximeter_cluster (unchanged)
    cluster secret:::::::::::::::::::::::::::::::::   750a492f-1c3d-430c-8d18-c74596fd2ec8 (unchanged)
*   highest seen keeper leader committed log index:   0 -> 13409

    clickhouse keepers generation 2 -> 3:
    ------------------------------------------------
    zone id                                keeper id
    ------------------------------------------------
    13c665e9-d7bd-43a5-b780-47acf8326feb   1        
    325e3ac5-6cc8-4aec-9ac0-ea8d9a60c40f   2        
    37e41e42-3b0c-49a6-8403-99fe66e84897   3        
    4e65bf56-c7d6-485d-9b7f-8513a55838f9   4        
    8a5df7fa-8633-4bf9-a7fa-567d5e62ffbf   5        
+   ccb1b5cf-7ca8-4c78-b9bc-970d156e6109   6        

    clickhouse servers generation 2 -> 3:
    ------------------------------------------------
    zone id                                server id
    ------------------------------------------------
    45af8162-253a-494c-992e-137d2bd5f350   1        
+   497f4829-f3fe-4c94-86b2-dbd4e814cc90   4        
    676772e0-d0c4-425b-a0d1-f6df46e4d10c   2        
    84d249d1-9c13-460a-9c7c-08a979471246   3        

We deploy it and run the same checks on the zones we checked previously, as well as on the new zones.

Old keeper zone:

root@oxz_clickhouse_keeper_37e41e42:~# curl http://[fd00:1122:3344:101::23]:8888/generation
3
root@oxz_clickhouse_keeper_37e41e42:~# head -n 1 /opt/oxide/clickhouse_keeper/keeper_config.xml 
<!-- generation:3 -->
root@oxz_clickhouse_keeper_37e41e42:~# curl http://[fd00:1122:3344:101::23]:8888/4lw-lgif
{"first_log_idx":1,"first_log_term":1,"last_log_idx":25198,"last_log_term":1,"last_committed_log_idx":25198,"leader_committed_log_idx":25198,"target_committed_log_idx":25198,"last_snapshot_idx":0}

New keeper zone:

root@oxz_clickhouse_keeper_ccb1b5cf:~# curl http://[fd00:1122:3344:101::29]:8888/generation
3
root@oxz_clickhouse_keeper_ccb1b5cf:~# head -n 1 /opt/oxide/clickhouse_keeper/keeper_config.xml 
<!-- generation:3 -->
root@oxz_clickhouse_keeper_ccb1b5cf:~# curl http://[fd00:1122:3344:101::29]:8888/4lw-lgif   
{"first_log_idx":1,"first_log_term":1,"last_log_idx":35857,"last_log_term":1,"last_committed_log_idx":35853,"leader_committed_log_idx":35853,"target_committed_log_idx":35853,"last_snapshot_idx":0}

Old replica zone:

root@oxz_clickhouse_server_676772e0:~# curl http://[fd00:1122:3344:101::28]:8888/generation
3
root@oxz_clickhouse_server_676772e0:~# head -n 1 /opt/oxide/clickhouse_server/config.d/replica-server-config.xml 
<!-- generation:3 -->
root@oxz_clickhouse_server_676772e0:~# /opt/oxide/clickhouse_server/clickhouse client --host fd00:1122:3344:101::28
ClickHouse client version 23.8.7.1.
Connecting to fd00:1122:3344:101::28:9000 as user default.
Connected to ClickHouse server version 23.8.7 revision 54465.

oximeter_cluster_2 :) show tables in oximeter

SHOW TABLES FROM oximeter

Query id: d4500915-d5b5-452f-a404-35e1e172b8f8

┌─name─────────────────────────────┐
│ fields_bool                      │
│ fields_bool_local                │
│ fields_i16                       │
│ fields_i16_local                 │
│ <...>                            │
│ version                          │
└──────────────────────────────────┘

81 rows in set. Elapsed: 0.002 sec. 

New replica zone:

root@oxz_clickhouse_server_497f4829:~# curl http://[fd00:1122:3344:101::2a]:8888/generation
3
root@oxz_clickhouse_server_497f4829:~# head -n 1 /opt/oxide/clickhouse_server/config.d/replica-server-config.xml 
<!-- generation:3 -->
root@oxz_clickhouse_server_497f4829:~# /opt/oxide/clickhouse_server/clickhouse client --host fd00:1122:3344:101::2a
ClickHouse client version 23.8.7.1.
Connecting to fd00:1122:3344:101::2a:9000 as user default.
Connected to ClickHouse server version 23.8.7 revision 54465.

oximeter_cluster_4 :) show tables in oximeter

SHOW TABLES FROM oximeter

Query id: 9e02b839-e938-44ef-8b2e-a61d0b8c25af

┌─name─────────────────────────────┐
│ fields_bool                      │
│ fields_bool_local                │
│ fields_i16                       │
│ fields_i16_local                 │
│ <...>                            │
│ version                          │
└──────────────────────────────────┘

81 rows in set. Elapsed: 0.014 sec. 

To verify clickhouse-admin exits with an error if the incoming generation number is lower than the current one, I tested by running clickhouse-admin against a local clickward deployment:

# clickhouse-admin-server

karcar@ixchel:~/src/omicron$ curl http://[::1]:8888/generation
34
karcar@ixchel:~/src/omicron$ curl --header "Content-Type: application/json" --request PUT "http://[::1]:8888/config" -d '
> {
>     "generation": 3,
>     "settings": {
>         "config_dir": "/tmp/ch-dir/",
>         "id": 1,
>         "datastore_path": "/tmp/ch-dir/",
>         "listen_addr": "::1",
>         "keepers": [{"ipv6": "::1"}],
>         "remote_servers": [{"ipv6": "::1"}]
>     }
> }'
{
  "request_id": "01809997-b9da-4e9c-837f-11413a6254b7",
  "error_code": "Internal",
  "message": "Internal Server Error"
}

# From the logs

{"msg":"request completed","v":0,"name":"clickhouse-admin-server","level":30,"time":"2025-01-21T01:08:24.946465Z","hostname":"ixchel","pid":58943,"uri":"/config","method":"PUT","req_id":"01809997-b9da-4e9c-837f-11413a6254b7","remote_addr":"[::1]:54628","local_addr":"[::1]:8888","component":"dropshot","file":"/Users/karcar/.cargo/registry/src/index.crates.io-6f17d22bba15001f/dropshot-0.13.0/src/server.rs:851","error_message_external":"Internal Server Error","error_message_internal":"current generation is greater than incoming generation","latency_us":227,"response_code":"500"}

# clickhouse-admin-keeper

karcar@ixchel:~/src/omicron$ curl http://[::1]:8888/generation
23
karcar@ixchel:~/src/omicron$ curl --header "Content-Type: application/json" --request PUT "http://[::1]:8888/config" -d '
{
    "generation": 2,
    "settings": {
        "config_dir": "/tmp/ch-dir/",
        "id": 1,
        "datastore_path": "/tmp/ch-dir/",
        "listen_addr": "::1",
        "raft_servers": [
            {
                "id": 1,
                "host": {"ipv6": "::1"}
            }
        ]
    }
}'
{
  "request_id": "e6b66ca9-10fa-421b-ac46-0e470d8e5512",
  "error_code": "Internal",
  "message": "Internal Server Error"

# From the logs

{"msg":"request completed","v":0,"name":"clickhouse-admin-keeper","level":30,"time":"2025-01-21T02:28:12.925343Z","hostname":"ixchel","pid":59371,"uri":"/config","method":"PUT","req_id":"e6b66ca9-10fa-421b-ac46-0e470d8e5512","remote_addr":"[::1]:64494","local_addr":"[::1]:8888","component":"dropshot","file":"/Users/karcar/.cargo/registry/src/index.crates.io-6f17d22bba15001f/dropshot-0.13.0/src/server.rs:851","error_message_external":"Internal Server Error","error_message_internal":"current generation is greater than incoming generation","latency_us":180,"response_code":"500"}

Closes: #7137

Comment on lines +96 to +98
    log: &Logger,
) -> Self {
    let log = log.new(slog::o!("component" => "ClickhouseCli"));
Contributor Author

This is part of the refactoring; the logs were a bit of a mess.

Comment on lines +1680 to +1686
let clickhouse_server_config =
    PropertyGroupBuilder::new("config")
        .add_property(
            "config_path",
            "astring",
            format!("{CLICKHOUSE_SERVER_CONFIG_DIR}/{CLICKHOUSE_SERVER_CONFIG_FILE}"),
        );
Contributor Author

Also part of the refactoring. Let's use the constants we are using for the configuration files in the SMF service as well, so we don't have to hardcode things into an SMF method script.

Comment on lines +32 to +38
pub fn new(
    log: &Logger,
    binary_path: Utf8PathBuf,
    listen_address: SocketAddrV6,
) -> Result<Self> {
    let clickhouse_cli =
        ClickhouseCli::new(binary_path, listen_address, log);
Contributor Author

Refactor as well: there was no need to pass clickhouse_cli as a parameter but not clickward, etc.

@karencfv karencfv marked this pull request as ready for review January 21, 2025 02:44
@karencfv karencfv requested a review from andrewjstone January 21, 2025 02:44
Contributor

@andrewjstone andrewjstone left a comment

Great stuff @karencfv!

// If there is already a configuration file with a generation number we'll
// use that. Otherwise, we set the generation number to None.
let gen = read_generation_from_file(config_path)?;
let generation = Mutex::new(gen);
Contributor

It's become practice at Oxide to avoid tokio mutexes wherever possible, as they have significant problems when cancelled and generally just don't do what we want. I realize there's already some usage here with regards to initialization. We don't have to fix that in this PR, but we should avoid adding new uses. We should instead use a std::sync::Mutex. I left a comment below about this as well.

See the following for more details:
https://rfd.shared.oxide.computer/rfd/0400#no_mutex
https://rfd.shared.oxide.computer/rfd/0397#_example_with_mutexes
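
For illustration, a minimal sketch of the pattern those RFDs recommend (the struct and field names here are hypothetical, not the PR's actual code): a std::sync::Mutex is fine in async handlers as long as the guard is never held across an .await point.

use std::sync::Mutex;

// Hypothetical context struct, for illustration only.
struct ServerContext {
    generation: Mutex<Option<u64>>,
}

impl ServerContext {
    fn set_generation(&self, incoming: u64) {
        // Lock, mutate, drop: the guard never crosses an `.await`,
        // so a plain std mutex is safe here even in async code.
        let mut gen = self.generation.lock().unwrap();
        *gen = Some(incoming);
    }
}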

Contributor Author

lol I was definitely on the fence on that one, I went for consistency in the end be1afc7#diff-c816600501b7aaa7de4a2eb9dc86498662030cea6390fa23e11a22c990efb510L28-L29

Thanks for the links! Hadn't seen those RFDs, will read them both

@@ -36,6 +60,10 @@ impl KeeperServerContext {
pub fn log(&self) -> &Logger {
    &self.log
}

pub async fn generation(&self) -> Option<Generation> {
    *self.generation.lock().await
Contributor

We only need read access here, and so we can easily avoid an async mutex here. Generation is also Copy, so this is cheap. I'd suggest making this a synchronous function and calling *self.generation.lock() instead.
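
A sketch of that suggestion, assuming generation becomes a std::sync::Mutex<Option<Generation>> (fragment only, for illustration):

// Synchronous, copy-out read: `Generation` is `Copy`, so the guard is
// dropped as soon as the value has been read, never across an `.await`.
pub fn generation(&self) -> Option<Generation> {
    *self.generation.lock().unwrap()
}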

Contributor

Yeah, I was wrong here. I wasn't considering the usage of the generation with regards to concurrent requests.

}

pub fn initialization_lock(&self) -> Arc<Mutex<()>> {
    self.initialization_lock.clone()
Contributor


I'm not sure if this usage of a tokio lock is safe or not due to cancellation. It looks like it aligns with the exact usage we have in our ServerContext. I also don't have an easy workaround for this right now, and so I guess I'm fine leaving this in to keep moving.

@sunshowers @jgallagher Do you have any ideas here?

Contributor

@jgallagher jgallagher Jan 22, 2025

Various thoughts; sorry if some of this is obvious, but I don't have much context here so am just hopping in:

  • Cloning an Arc<tokio::Mutex<_>> is fine (the clone is fully at the Arc layer)
  • ... that said I don't think we need to clone here? Returning &Mutex<()> looks like it'd be okay.
  • Mutex<()> is kinda fishy and probably worthy of a comment, since typically the mutex is protecting some data. (Maybe there is one somewhere that I'm not seeing!)
  • It looks like the use of this is to prevent the /init_db endpoint from running concurrently? That is definitely not cancel safe. If dropshot were configured to cancel handlers on client disconnect, a client could start an /init_db, drop the request (unlocking the mutex), then start it again while the first one was still running.

On the last point: I think this is "fine" as long as dropshot is configured correctly (i.e., to not cancel handlers). If we wanted this to be correct even under cancellation, I'd probably move the init process into a separate tokio task and manage that either with channels or a sync mutex. Happy to expand on those ideas if it'd be helpful.

Contributor Author

Thanks for the input!

Mutex<()> is kinda fishy and probably worthy of a comment, since typically the mutex is protecting some data. (Maybe there is one somewhere that I'm not seeing!)

Tbh, I'm just moving code around that was already here. I'm not really sure what the intention was initially.

On the last point: I think this is "fine" as long as dropshot is configured correctly (i.e., to not cancel handlers). If we wanted this to be correct even under cancellation, I'd probably move the init process into a separate tokio task and manage that either with channels or a sync mutex.

That sounds like a good idea regardless of what the initial intention was. Do you mind expanding a little on those ideas? It'd definitely be helpful

Contributor

@jgallagher jgallagher Jan 22, 2025

Sure thing! One pattern we've used in a bunch of places is to spawn a long-lived tokio task and then communicate with it via channels. This looks something like (untested and lots of details omitted):

// kinds of things we can ask the task to do
enum Request {
    DoSomeThing {
        // any inputs from us the task needs
        data: DataNeededToDoSomeThing,
        // a oneshot channel the task uses to send us the result of our request
        response: oneshot::Sender<ResultOfSomeThing>,
    },
}

// the long-lived task: loop over incoming requests and handle them
async fn long_running_task(mut incoming: mpsc::Receiver<Request>) {
    // run until the sending half of `incoming` is dropped
    while let Some(request) = incoming.recv().await {
        match request {
            Request::DoSomeThing { data, response } => {
                let result = do_some_thing(data);
                let _ = response.send(result);
            }
        }
    }
}

// our main code: one time up front, create the channel we use to talk to the inner task and spawn that task
let (inner_tx, inner_rx) = mpsc::channel(N); // picking N here can be hard
let join_handle = tokio::spawn(long_running_task(inner_rx));

// ... somewhere else, when we want the task to do something for us ...
let (response_tx, response_rx) = oneshot::channel();
inner_tx.send(Request::DoSomeThing { data, response: response_tx }).await;
let result = response_rx.await;

A real example of this pattern (albeit more complex; I'm not finding any super simple ones at the moment) is in the bootstrap agent: here's where we spawn the inner task. It has a couple different channels for incoming requests, so its run loop is a tokio::select over those channels but is otherwise pretty similar to the outline above.

This pattern is nice because regardless of how many concurrent callers try to send messages to the inner task, it itself can do things serially. In my pseudocode above, if the ... somewhere else bit is an HTTP handler, even if we get a dozen concurrent requests, the inner task will process them one at a time because it's forcing serialization via the channel it's receiving on.

I really like this pattern. But it has some problems:

  • Picking the channel depth is hard. Whatever N we pick, that means up to that many callers can be waiting in line. Sometimes we don't want that at all, but tokio's mpsc channels don't allow N=0. (There are other channel implementations that do if we decide we need this.)
  • If we just use inner_tx.send(_) as in my pseudocode, even if the channel is full, that will just block until there's room, so we actually have an infinite line. This can be avoided via try_send instead, which allows us to bubble out some kind of "we're too busy for more requests" backpressure to our caller (see the sketch after this list).
  • If do_some_thing() is slow, this can all compound and make everybody slow.
  • If do_some_thing() hangs, then everybody trying to send requests to the inner task hangs too. (This recently happened to us in sled-agent!)
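
A sketch of the try_send variant from the second bullet, assuming tokio's mpsc and dropshot's HttpError (the enqueue helper uses the Request enum from the pseudocode above; none of this is real omicron code):

use tokio::sync::mpsc::{self, error::TrySendError};

// Hypothetical handler-side helper: reject instead of queueing when the
// long-running task already has a full backlog.
fn enqueue(inner_tx: &mpsc::Sender<Request>, req: Request) -> Result<(), HttpError> {
    match inner_tx.try_send(req) {
        Ok(()) => Ok(()),
        // Channel full: bubble out "we're too busy" backpressure.
        Err(TrySendError::Full(_)) => Err(HttpError::for_unavail(
            None,
            String::from("too many requests in flight; try again later"),
        )),
        // The task exited; nothing will ever service this request.
        Err(TrySendError::Closed(_)) => Err(HttpError::for_internal_error(
            String::from("background task has exited"),
        )),
    }
}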

Contributor

A "build your own" variant of the above in the case where you want at most one instance of some operation is to use a sync::Mutex around a tokio task join handle. This would look something like (again untested, details omitted):

// one time up front, create a sync mutex around an optional tokio task join handle
let task_lock = sync::Mutex::new(None);

// ... somewhere else, where we want to do work ...

// acquire the lock
let mut task_lock = task_lock.lock().unwrap();

// if there's a previous task running, is it still running?
let still_running = match task_lock.as_ref() {
    Some(joinhandle) => !joinhandle.is_finished(),
    None => false,
};
if still_running {
    // return a "we're busy" error
}

// any previous task is done; start a new one
*task_lock = Some(tokio::spawn(do_some_work()));

This has its own problems; the biggest one is that we can't wait for the result of do_some_work() while holding the lock, so this really only works for background stuff that either doesn't need to return results at all, or the caller is in a position to poll us for completion at some point in the future. (In the joinhandle.is_finished() case, we can .await it to get the result of do_some_work().)

We don't use this pattern as much. One example is in installinator, where we do want to get the result of previously-completed tasks.

Contributor

Thanks for the write up, John. I think, overall, it's probably simpler to have a long-running task and issue requests that way. As you mentioned, this has its own problems, but we know what those problems are and we use this pattern all over sled agent.

In this case we can constrain the problem such that we only want to handle one in-flight request at a time, since reconfigurator execution will retry again later anyway. I'd suggest using a flume bounded channel with a size of 0 to act as a rendezvous channel. That should give the behavior we want. We could have separate tasks for performing initialization and config writing so we don't have one block out the other.
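
A minimal sketch of that suggestion, assuming the flume crate: flume::bounded(0) creates a rendezvous channel, so a send can only succeed while the task is parked in recv_async, i.e. while it is idle (handle and the error handling here are hypothetical):

let (tx, rx) = flume::bounded::<Request>(0);

// Long-running task: because the channel has no buffer, it accepts a new
// request only when it's not already working on one.
tokio::spawn(async move {
    while let Ok(request) = rx.recv_async().await {
        handle(request).await;
    }
});

// Handler side: `try_send` fails immediately if the task is busy, which we
// can surface as a "busy, retry later" error; reconfigurator will retry.
if tx.try_send(request).is_err() {
    // return a 503-style "we're busy" error to the caller
}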

Contributor Author

excellent! Thanks a bunch for the write up!

Contributor Author

We could have separate tasks for performing initialization and config writing so we don't have one block out the other.

@andrewjstone, do we really not want them to block out each other? It'd be problematic to have the db init job trying to run when the generate config one hasn't finished, and vice versa, no?

// file generation.
if let Some(current) = current_generation {
    if current > incoming_generation {
        return Err(HttpError::for_internal_error(
Contributor


This doesn't feel like an internal error to me. This is an expected race condition, and so I think we should return a 400 level error instead of a 500 level error. I think 412 is an appropriate error code, even though we are not using etags for a precondition. @davepacheco does that make sense to you?

Collaborator


Definitely agreed it's not a 500. It looks like Sled Agent uses 409 (Conflict) for this and I'd suggest using that for consistency.

// Absolutely refuse to downgrade the configuration.
if ledger_zone_config.omicron_generation > request.generation {
    return Err(Error::RequestedConfigOutdated {
        requested: request.generation,
        current: ledger_zone_config.omicron_generation,
    });
}

Error::RequestedConfigOutdated { .. } => {
    omicron_common::api::external::Error::conflict(&err.to_string())
}

// file generation.
if let Some(current) = current_generation {
    if current > incoming_generation {
        return Err(HttpError::for_internal_error(
Contributor

Same thing as above. I think this should be a 400-level error.


// We want to update the generation number only if the config file has been
// generated successfully.
*ctx.generation.lock().await = Some(incoming_generation);
Contributor

@jgallagher jgallagher Jan 22, 2025

Is there a TOCTOU problem here, in that ctx.generation could have changed between when we checked it above and when we reacquire the lock here to set it?

Contributor Author

Hm, I guess that depends on how reconfigurator works? How often is the generation changing?

I decided to update the generation number once the config file had been successfully generated, because if it hadn't, then the zone wouldn't be fully in that generation. Do you think it makes more sense to update the generation immediately?

Contributor

I think I'd consider this outside the context of reconfigurator. If this endpoint is called multiple times concurrently with different incoming generations, does it behave correctly? That way we don't have an implicit dependency between the correctness of this endpoint and the behavior or timing of reconfigurator.

Sorry for the dumb questions, but - is it safe for two instances of generate_server_config() to be running concurrently? I think that has implications on what we need to do with the lock on generation.

Contributor Author

I think I'd consider this outside the context of reconfigurator. If this endpoint is called multiple times concurrently with different incoming generations, does it behave correctly?

I guess there could be an error if two generate_server_config()s with different generation numbers are running, they both read the initial value for generation, but the one with the lower number manages to write after the one with the higher number.

Thanks for the input! I guess that settles it, I'll update the number immediately after reading. I was on the fence about this one anyway. Even if the config is borked, it'll be borked in that generation.

Contributor

Hm, I'm not sure that's enough. We may need to write the config file while holding the lock too, I think?

Imagine we're currently on gen 1 and we get two concurrent requests, one that gives us gen 2 and one that gives us gen 3. If our code is something like:

{
    let gen = acquire_generation_lock().await;

    if *gen > incoming_generation {
        return an error;
    }

    *gen = incoming_generation;
} // release `gen` lock

write_new_config_file();

then one possible ordering is:

  • The request for gen 2 acquires the lock. We're currently on gen 1, so this is fine. We update to gen=2 and release the lock. Then we get parked for some reason.
  • The request for gen 3 acquires the lock. We're currently on gen 2, so this is fine. We update to gen=3 and release the lock. We write our config file.
  • The gen 2 request gets unparked. It writes its config file.

Then at this point we think we're on gen=3 but the config file on disk is the one from gen=2.

Contributor Author

Thanks for the detailed answer!

Hm, I'm not sure that's enough. We may need to write the config file while holding the lock too, I think?

Yep, that makes total sense

Contributor

Yeah, you are right @jgallagher. These requests all need to be serialized. (I know you are currently writing up some options, just wanted to drop a note).

@karencfv
Contributor Author

Thanks for the reviews everyone! I'm not finished here, but leaving it for today.
I've updated a couple of endpoints just to try out the new pattern and it seems to be working fine. I just need to move the init_db() functionality to the task and do a bit of clean up, but generally this is the direction I'm taking.

Contributor Author

@karencfv karencfv left a comment

I think I've addressed all of the comments, let me know if there's something I'm missing!

I've run all the manual tests I did before and received the same results as before.

Comment on lines +124 to +194
pub fn generate_config_and_enable_svc(
    &self,
    replica_settings: ServerConfigurableSettings,
) -> Result<ReplicaConfig, HttpError> {
    let mut current_generation = self.generation.lock().unwrap();
    let incoming_generation = replica_settings.generation();

    // If the incoming generation number is lower, then we have a problem.
    // We should return an error instead of silently skipping the configuration
    // file generation.
    if let Some(current) = *current_generation {
        if current > incoming_generation {
            return Err(HttpError::for_client_error(
                Some(String::from("Conflict")),
                StatusCode::CONFLICT,
                format!(
                    "current generation '{}' is greater than incoming generation '{}'",
                    current,
                    incoming_generation,
                )
            ));
        }
    };

    let output =
        self.clickward().generate_server_config(replica_settings)?;

    // We want to update the generation number only if the config file has been
    // generated successfully.
    *current_generation = Some(incoming_generation);

    // Once we have generated the client we can safely enable the clickhouse_server service
    let fmri = "svc:/oxide/clickhouse_server:default".to_string();
    Svcadm::enable_service(fmri)?;

    Ok(output)
}

pub async fn init_db(&self) -> Result<(), HttpError> {
    let log = self.log();
    // Initialize the database only if it was not previously initialized.
    // TODO: Migrate schema to newer version without wiping data.
    let client = self.oximeter_client();
    let version = client.read_latest_version().await.map_err(|e| {
        HttpError::for_internal_error(format!(
            "can't read ClickHouse version: {e}",
        ))
    })?;
    if version == 0 {
        info!(
            log,
            "initializing replicated ClickHouse cluster to version {OXIMETER_VERSION}"
        );
        let replicated = true;
        self.oximeter_client()
            .initialize_db_with_version(replicated, OXIMETER_VERSION)
            .await
            .map_err(|e| {
                HttpError::for_internal_error(format!(
                    "can't initialize replicated ClickHouse cluster \
                    to version {OXIMETER_VERSION}: {e}",
                ))
            })?;
    } else {
        info!(
            log,
            "skipping initialization of replicated ClickHouse cluster at version {version}"
        );
    }

    Ok(())
Contributor Author

This is a mechanical change, moving most of the functionality from context.rs to here so we can call these from long_running_ch_server_task.
