Expose NetID and/or a DeploymentID #532
TTN Mapper is starting to support multiple networks. The feature will allow end users to view different networks' coverage as different layers on the map. See https://twitter.com/ttnmapper/status/1374275806969667585?s=20. Exactly which networks will be shown on the public website, and which will be behind a login, is still under consideration.

The main thing to take from the above is that networks with unique coverage areas need to be identified. We can easily distinguish between a Things Stack network and a ChirpStack network, as their JSON formats are different and therefore use different API endpoints for the webhooks. Distinguishing between different ChirpStack instances, however, is very difficult, or actually impossible, because most instances will use the experimental NetID block.

A solution to this is to generate a UUID when the ChirpStack instance is started up for the first time, persist it, and use that as the NSID/DeploymentID. For 3rd-party systems like TTN Mapper to know which network the data originates from, this identifier would then need to be passed along with the uplink data.

More things to keep in mind:
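The "generate once, persist, reuse" idea above can be sketched as follows. This is a hypothetical illustration in Python, not ChirpStack's actual implementation; the state-file name and JSON layout are assumptions.

```python
import json
import uuid
from pathlib import Path


def get_deployment_id(state_file: Path) -> str:
    """Return a stable per-deployment UUID.

    Generated on first start and persisted, so every later call
    (and every later restart) returns the same identifier.
    """
    if state_file.exists():
        return json.loads(state_file.read_text())["deployment_id"]
    deployment_id = str(uuid.uuid4())
    state_file.write_text(json.dumps({"deployment_id": deployment_id}))
    return deployment_id
```

Because the value is random rather than derived from routing configuration, changing or regenerating it would not affect frame processing, which is exactly the property wanted for a DeploymentID.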
This same issue is discussed for the TTI stack here: TheThingsNetwork/lorawan-stack#4076
@brocaar any comments on this?
Yes, I can see the use-case of at least exposing the NetID field for now in the uplink messages. I'm a bit hesitant to add a Deployment ID at this point, as I'm planning to make some other changes which might overlap with this. I want to avoid adding something now which I will then remove again in the near future. @jpmeijers, would a unique value per organization work, or should it be server-wide? Note that it is possible to make gateways private, so the coverage might be per organization, not per server instance.
@jpmeijers do you have a reference to this? Note that https://github.com/brocaar/chirpstack-api/blob/829ff1994dd6c3a2c69a01462c8e8986ef1bdbda/rust/proto/chirpstack-api/gw/gw.proto#L175 might not be the best place for this. I believe this is a better place for this: then the same info can be exposed in the uplink integration message.
Only @johanstokking's comment on TheThingsNetwork/lorawan-stack#4076, second last paragraph, point 2:
Yes, very good point. On the Mapper's side it should be possible to merge multiple servers' coverage into an organisation's coverage, but I can't split an organisation's coverage into its separate network instances. So tagging/identifying a server instance is more versatile than identifying the organisation. "Private" gateways will still contribute coverage areas (heatmap), but will not have markers indicating their locations on the map. Remember here that private networks (all ChirpStack networks) will not be shown publicly on TTN Mapper. Users will either get a unique URL, or need to sign in. This is still a work in progress on the Mapper's side.
Fair enough. If we add only the NetID, most ChirpStack instances will still be indistinguishable, since they share the experimental NetID block.

Evolution of identifying the ChirpStack coverage on TTN Mapper's side:
Coming back to this issue now, because I'm facing this again. A big ChirpStack user has one or two instances running. Per instance there are multiple tenants. The data that the Mapper receives from ChirpStack identifies the tenant, but there is no way to identify the server instance/organisation. In this specific case tenants share coverage, so tagging coverage by the tenant is not ideal. I'd rather tag the coverage by the NetID or server instance, or a combination of the tenant and server/NetID.

What is the likelihood of exposing the NetID in the Up event? Or alternatively a UUID of the server instance, similar to the Tenant ID?

Update 1: For clarity, coverage is shared between tenants. See docs.

Update 2: We had the idea to use the DevAddr prefix, but most private networks use the experimental NetID block, so their DevAddr ranges overlap. That means we cannot uniquely identify a network based on the DevAddr.
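The DevAddr point can be illustrated with a small sketch. Under the original LoRaWAN 1.0 addressing scheme, the 7 most significant bits of a DevAddr are the NwkID derived from the NetID, so two unrelated networks that both allocate addresses from the experimental NetID 0x000000 block produce the same NwkID. The sample addresses below are made up for illustration.

```python
def nwk_id(devaddr: int) -> int:
    """Extract the 7-bit NwkID (LoRaWAN 1.0, type-0 addressing)
    from a 32-bit DevAddr."""
    return (devaddr >> 25) & 0x7F


# Two unrelated private networks, both assigning addresses from the
# experimental NetID 0x000000 block, yield the same NwkID of 0:
network_a_devaddr = 0x0000_1234
network_b_devaddr = 0x01AB_CDEF
assert nwk_id(network_a_devaddr) == nwk_id(network_b_devaddr) == 0
```

Since the NwkID collides, nothing recoverable from the frame itself separates the two deployments, which is why a server-generated DeploymentID is needed.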
Summary
What is the use-case?
Identifying networks across services, for example enabling a ttn-mapper integration as discussed with @jpmeijers.
Implementation description
We could add it to every frame forwarded to the application-server.
https://github.com/brocaar/chirpstack-api/blob/829ff1994dd6c3a2c69a01462c8e8986ef1bdbda/rust/proto/chirpstack-api/gw/gw.proto#L175
Frames coming from the gateway would be missing the property, but on processing/forwarding the frame could get the NetID of the ns handling it, a bit like the roaming code does it.
In any case, as most deployments (I assume) are using the default config, it would be useful to have a "UUID" identifying the NS deployment.
This UUID can be generated on first start, like the admin user in the AS, and be unique to that deployment. We would then have both a NetID and a DeploymentID. Changing this "DeploymentID" should not have any impact on the processing or routing of frames.
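A minimal sketch of the forwarding step described above: the frame arrives without the identifiers, and the NS attaches them before handing the event to the application server or an integration. The field names `netID` and `deploymentID` are assumptions for illustration, not ChirpStack's actual event schema.

```python
import json


def tag_uplink_event(event: dict, net_id: str, deployment_id: str) -> str:
    """Attach network identifiers to an uplink event before forwarding.

    The incoming event (e.g. from a gateway) lacks these fields; the NS
    fills them in from its own configuration, similar to how the roaming
    code knows its own NetID.
    """
    tagged = dict(event)  # leave the original event untouched
    tagged["netID"] = net_id
    tagged["deploymentID"] = deployment_id
    return json.dumps(tagged)
```

A consumer such as TTN Mapper could then group coverage by `(netID, deploymentID)` instead of by tenant.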
Can you implement this by yourself and make a pull request?
Probably yes.