
Hook error when MetalLB does not have available IP addresses #55

Closed
patriciareinoso opened this issue Nov 15, 2023 · 2 comments · Fixed by #56

Comments

@patriciareinoso
Contributor

Describe the bug
We are getting a hook failed error when relating the UPF to the NMS (N4 interface) whenever all of the MetalLB IP addresses have already been assigned.

Expected behavior
The charm should go to blocked status.

Logs

Unit                         Workload  Agent  Address       Ports  Message
amf/0*                       active    idle   10.1.110.186         
ausf/0*                      active    idle   10.1.110.190         
gnbsim/0*                    waiting   idle   10.1.110.184         Waiting for N2 information
grafana-agent-k8s/0*         blocked   idle   10.1.110.132         grafana-cloud-config: off, logging-consumer: off
mongodb-k8s/0*               active    idle   10.1.110.142         Primary
nrf/0*                       active    idle   10.1.110.149         
nssf/0*                      active    idle   10.1.110.155         
pcf/0*                       active    idle   10.1.110.156         
router/0*                    active    idle   10.1.110.183         
sdcore-nms/0*                blocked   idle   10.1.110.187         Waiting for `sdcore-management` relation to be created
self-signed-certificates/0*  active    idle   10.1.110.140         
smf/0*                       active    idle   10.1.110.163         
udm/0*                       active    idle   10.1.110.166         
udr/0*                       active    idle   10.1.110.172         
upf/0*                       error     idle   10.1.110.177         hook failed: "fiveg_n4-relation-joined"
unit-upf-0: 16:20:42 ERROR unit.upf/0.juju-log fiveg_n4:28: Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 980, in <module>
    main(UPFOperatorCharm)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 436, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/main.py", line 144, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 340, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 842, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/lib/charms/sdcore_upf/v0/fiveg_n4.py", line 231, in _on_relation_joined
    self.on.fiveg_n4_request.emit(relation_id=event.relation.id)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 340, in emit
    framework._emit(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 842, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-upf-0/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 252, in _on_fiveg_n4_request
    self._update_fiveg_n4_relation_data()
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 276, in _update_fiveg_n4_relation_data
    upf_hostname=self._get_n4_upf_hostname(),
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 293, in _get_n4_upf_hostname
    elif lb_hostname := self._upf_load_balancer_service_hostname():
  File "/var/lib/juju/agents/unit-upf-0/charm/./src/charm.py", line 751, in _upf_load_balancer_service_hostname
    return service.status.loadBalancer.ingress[0].hostname  # type: ignore[attr-defined]
TypeError: 'NoneType' object is not subscriptable

The EXTERNAL-IP of the `upf-external` service stays as `<pending>`:

$ kubectl -n core2 get svc

NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
upf-external   LoadBalancer   10.152.183.188   <pending>     8805:30196/UDP   29m
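
The failing line in charm.py assumes that the Service always carries at least one load balancer ingress entry. While the EXTERNAL-IP is `<pending>`, that field is empty, which can be confirmed with a small lightkube snippet (a sketch, assuming access to the same `core2` namespace; the charm's own Kubernetes client code is not reproduced here):

from lightkube import Client
from lightkube.resources.core_v1 import Service

client = Client()
service = client.get(Service, name="upf-external", namespace="core2")
# While no MetalLB address is assigned, this prints None, so indexing it
# with [0] raises the TypeError shown in the traceback above.
print(service.status.loadBalancer.ingress)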

How to reproduce
Disabling MetalLB should probably be enough (see the note after these steps), or make sure that all of the IPs in its range have already been assigned.
Deploy SD Core
Deploy NMS
juju integrate sdcore-nms:sdcore-management sdcore-webui:sdcore-management
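
Note: on MicroK8s, assuming MetalLB was enabled as the built-in addon, it can be disabled with `microk8s disable metallb`.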

Environment
Juju version: 3.1.6
Cloud Environment: MicroK8s
Kubernetes version: v1.27.6

@patriciareinoso
Contributor Author

Seems to be the same behavior as canonical/sdcore-amf-k8s-operator#45

@dariofaccin
Contributor

The expected behaviour is that the charm should use the internal UPF hostname whenever the external hostname from MetalLB is not available. It already does that when MetalLB is enabled but provides no hostname; it does not handle the case in which MetalLB is not enabled at all.
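
A minimal sketch of the kind of guard that would give that behaviour, written as a standalone function rather than the charm's actual `_upf_load_balancer_service_hostname` method (the lightkube usage and the function name here are assumptions for illustration):

from typing import Optional

from lightkube import Client
from lightkube.resources.core_v1 import Service

def load_balancer_hostname(service_name: str, namespace: str) -> Optional[str]:
    """Return the LoadBalancer hostname, or None if no external address was assigned."""
    service = Client().get(Service, name=service_name, namespace=namespace)
    ingress = service.status.loadBalancer.ingress
    if not ingress:
        # MetalLB disabled or out of addresses: the Service stays <pending> and
        # `ingress` is None, so report "no hostname" instead of crashing.
        return None
    return ingress[0].hostname

With a guard like this returning None, `_get_n4_upf_hostname()` would fall through its `elif lb_hostname := ...` branch and use the internal UPF hostname, matching the behaviour described above.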
