[v1] flow-collector: add source ip to site-clients process name #1857
base: main
Conversation
Tested with 2 external VM clients & 2 external VM servers. This change resulted in the following (total flow) metrics being reported to Prometheus.
With this change, the Processes tab in the skupper-console shows 2 site-clients (1 for each client IP) and 1 site-servers. The Topology / Components page in the skupper-console shows 1 bubble for 'site-clients' and 1 bubble for 'site-servers'. Also, when I look at the site-servers process, I notice that only 1 address is listed.
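For context, here is a minimal Go sketch of the idea behind the change; the helper and names below are hypothetical illustrations, not the actual patch. Folding the observed source IP into the site-clients process name gives each external client VM its own process record, and therefore its own flow metric series.

```go
// Hypothetical sketch (invented names, not the real flow-collector code):
// derive a distinct site-client process name per observed source IP so that
// traffic from each external client VM is reported separately.
package main

import "fmt"

// siteClientProcessName builds a per-source-IP process name for traffic
// entering the site from an otherwise unknown external client.
func siteClientProcessName(siteName, sourceIP string) string {
	return fmt.Sprintf("%s-site-clients-%s", siteName, sourceIP)
}

func main() {
	// Two external client VMs would show up as two distinct processes.
	fmt.Println(siteClientProcessName("west", "192.168.56.11"))
	fmt.Println(siteClientProcessName("west", "192.168.56.12"))
}
```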
Force-pushed from f4ea845 to 05e6659
This results in flow metrics being generated for each external client-VM IP. Fixes skupperproject#1859
Force-pushed from 05e6659 to e4c1b41
@Karen-Schoener Regarding the expected 2 addresses on the site-servers process record: it appears that the LIST API method only appends 1 address per process record (the address from the associated connector record, a 1:1 relationship, even though 2 connectors send traffic to the same process). So I think this is just existing behavior, not a result of your change (unless I am reading the code wrong). See flow_mem_driver.go ~ln 1810.
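To make the 1:1 behavior described above concrete, here is a hypothetical Go sketch; the types and names are invented and this is not the real flow_mem_driver.go code. The listing walks the process records and appends only the address of the single associated connector, so a process that receives traffic through two connectors still lists one address.

```go
// Hypothetical sketch of the LIST behavior described above (invented types,
// not the real flow_mem_driver.go code).
package main

import "fmt"

type Connector struct {
	Address   string
	ProcessID string
}

type Process struct {
	ID        string
	Addresses []string
}

// listProcesses appends the address from the single associated connector
// record (a 1:1 lookup), even if other connectors also target the process.
func listProcesses(procs []Process, connectorByProcess map[string]Connector) []Process {
	out := make([]Process, 0, len(procs))
	for _, p := range procs {
		if c, ok := connectorByProcess[p.ID]; ok {
			p.Addresses = append(p.Addresses, c.Address)
		}
		out = append(out, p)
	}
	return out
}

func main() {
	procs := []Process{{ID: "site-servers"}}
	connectors := map[string]Connector{
		"site-servers": {Address: "backend:8080", ProcessID: "site-servers"},
	}
	// Prints a single address even if a second connector also routes here.
	fmt.Printf("%+v\n", listProcesses(procs, connectors))
}
```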
@Karen-Schoener Would you mind sharing how you set up your services to test this? The external servers and clients are off the beaten path enough that I can't recall how to get it all wired together.
@c-kruse I documented the steps in this ticket: #1859 (comment). If there are any questions about the steps, please let me know. Thanks, Karen
Thank you @Karen-Schoener! I think this is mostly making sense and looks like it works. On my first try I can't seem to get more than one site-client going, but I suspect I just need to fiddle further with my setup (kind and/or minikube with the Docker "driver" seem to observe external traffic as originating from some gateway address, 10.244.0.1, for me). I probably need to use something VM or bare-metal based with a bridge network instead?
@c-kruse So far, I've tested with 2 VM clients and 2 VM servers. I'm testing on a Windows PC that is running VirtualBox VMs. One of the VMs is running minikube / skupper. In my test environment, the external traffic is seen as originating from the VM-client IP addresses.
@c-kruse I have also noticed that the client IP is SNAT'd for certain kube distributions (Docker seems to be the usual suspect; k3d in particular stands out in my memory). I use microk8s as a kube distro that doesn't NAT the source IP.
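As an aside, a quick way to check whether a given distribution SNATs incoming traffic is to run a trivial echo server in the cluster and compare the remote address it reports with the client VM's real IP. This is a generic sketch, not part of the skupper code.

```go
// Minimal echo server for checking whether the client source IP survives
// into the cluster. If it prints a gateway address (e.g. 10.244.0.1)
// instead of the client VM's IP, the distribution is SNAT'ing the source.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "seen source: %s\n", r.RemoteAddr)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```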
Thank you @Karen-Schoener and @scwhaley. Looks solid to me! Adding @bartoval, because he has a sharper eye for the console side of things than I do.
FWIW, I never did manage to produce a cluster that was not doing SNAT on incoming packets, even with microk8s. Instead, I was just hitting the skupper service from pods in another namespace, which has the same effect as far as I can reason.
Ran tests on my end; this seems to be working as expected. Approved! Thanks @Karen-Schoener for the great work!