This repository has been archived by the owner on Jul 24, 2024. It is now read-only.

Support deployment on plain Kubernetes #6200

Closed
balejos opened this issue Jul 18, 2019 · 29 comments · Fixed by #8697
Labels
cat/discussion This issues requires a discussion status/never-stale Marker that this issue should not be marked as stale

Comments

@balejos

balejos commented Jul 18, 2019

Coming from a retro on decreasing the complexity of bringing up the dev environment, and also mentioned in Planning Syndesis 2.0.
We identified that by working towards installing on plain Kubernetes, we would discover the assumptions we have made and attract community deployments.

ToDo & Considerations:

  • Switch the runtime to Camel-K (which can run on plain K)
  • Replace s2i (option)?
  • SSO with OpenShift needs an alternative
  • Install Operator
  • Fix override of deployment env vars, eg. DEBUG
  • Route conversion to Ingress
  • Migration of infrastructure resources
@pure-bot pure-bot bot added the notif/triage The issue needs triage. Applied automatically to all new issues. label Jul 18, 2019
@balejos balejos added cat/discussion This issues requires a discussion and removed notif/triage The issue needs triage. Applied automatically to all new issues. labels Jul 18, 2019
@balejos balejos added this to the Sprint 49 milestone Jul 18, 2019
@balejos
Author

balejos commented Jul 18, 2019

Related to #3826 and #6556

@heiko-braun heiko-braun removed this from the Sprint 49 milestone Jul 29, 2019
@KurtStam
Contributor

Some things come to mind that need to be looked at to make this happen:

  1. Switch the runtime to Camel-K (which can run on plain K)
  2. Replace s2i (option)?
  3. SSO with OpenShift needs an alternative
  4. OpenShift Templates
  5. ?

@lgarciaaco
Contributor

lgarciaaco commented Sep 13, 2019

We cannot use Routes, ImageStreams and DeploymentConfigs ... they are all OpenShift objects

@zregvart
Member

Camel-K removes the need for Syndesis to perform the S2I build; it has strategies in place to perform the build itself, either via S2I or via Kaniko.

@stale

stale bot commented Mar 3, 2020

This issue has been automatically marked as stale because it has not had any activity since 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!

@stale stale bot added the status/stale Issue considered to be stale so that it can be closed soon label Mar 3, 2020
@zregvart zregvart added the status/never-stale Marker that this issue should not be marked as stale label Mar 3, 2020
@stale stale bot removed the status/stale Issue considered to be stale so that it can be closed soon label Mar 3, 2020
@phantomjinx phantomjinx self-assigned this Mar 4, 2020
@phantomjinx
Contributor

Trying out https://microk8s.io/#get-started

@phantomjinx
Contributor

Need to define "plain Kubernetes".

  • Kubernetes is a framework not a product;
  • The products are defined as distros with Kubernetes being the kernel;
  • Openshift / OKD is one such distro that provides important extra elements for making a more user-friendly environment;
  • microk8s is an alternative kubernetes distro;

Therefore what is the objective?

  1. Remove / migrate from Openshift-specific structural elements used in Syndesis to allow for a more run-anywhere app?
  2. Allow / test installation & running of Syndesis on a number of different Kubernetes platforms to maximise community participation?
  3. Where an Openshift feature is considered essential, provide an alternative configuration for a Kubernetes install while retaining the Openshift feature, i.e. maintenance of multiple installable configurations?

@zregvart
Member

zregvart commented Mar 5, 2020

  • Remove / migrate from Openshift-specific structural elements used in Syndesis to allow for a more-run-anywhere app?

OpenShift-specific objects like DeploymentConfig and Route, and the way we utilize the S2I build, make Syndesis non-portable to any other Kubernetes distribution. I'd start with having a way to install and run Syndesis on Kubernetes. Defaulting to Camel K for running integrations will give us portability, as it supports both plain Kubernetes and OpenShift.

  • Allow / test installation & running on Syndesis on a number of different Kubernetes platforms to maximise community participation?

I'd focus on one; minikube is probably the one used most as a developer platform (similar to minishift/crc), so running on minikube should be a representative common ground for any Kubernetes. I don't mind giving microk8s a try, but I think we should not spread ourselves too thinly.

  • Where an Openshift feature is considered essential, provide an alternative configuration for a Kubernetes install while retaining the Openshift feature, ie. maintenance of multiple installable configurations?

The approach Camel K took is to support both OpenShift and Kubernetes, and I think that makes sense. Though I don't think we need to depend on OpenShift specifics too much even when running on OpenShift. What we have, for example, with DeploymentConfig is caused either by us not realizing there was a Kubernetes alternative (Deployment) or by that ability not existing at the time we started.

@phantomjinx
Contributor

Gist for guidelines on converting DeploymentConfig to Deployment:
https://gist.github.com/bmaupin/d5be3ca882345ff92e8336698230dae0
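
As a hedged illustration of the kind of conversion the gist covers (the resource below is a made-up example, not one of the Syndesis templates):

# before: openshift-only
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-app
spec:
  replicas: 1
  selector:          # plain map of labels
    app: example-app
  strategy:
    type: Rolling
  triggers:
    - type: ConfigChange
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: quay.io/example/app:latest
---
# after: the portable kubernetes equivalent
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:          # selector gains a matchLabels wrapper
      app: example-app
  strategy:
    type: RollingUpdate   # Rolling becomes RollingUpdate
  template:               # triggers have no equivalent and are dropped
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: quay.io/example/app:latest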

@phantomjinx
Contributor

Interesting issue/discussion on the possibility of oc converting between Deployment and DeploymentConfig (sadly stale at the moment):
openshift/origin#16763

@phantomjinx
Contributor

Creating an ingress resource ->
https://blog.openshift.com/kubernetes-ingress-vs-openshift-route/

@phantomjinx
Contributor

phantomjinx commented Mar 9, 2020

Progress with research links

  1. Configured the ability to build the operator image into a docker registry.
  2. Understood that the local docker registry is independent of the kubernetes registry; the s2i build provided in the syndesis build scripts builds the image and adds it to the openshift registry. This does not happen with kubernetes.
  3. Encountered an error concerning localhost defaulting to the IPv6 ::1 address, which results in a hang on 'docker push' (see the sketch after the note):
  • Changed all references from localhost to 127.0.0.1 and retried -> docker push succeeded.
  • Once pushed, the microk8s registry failed to find the image when the install of syndesis-operator executed syndesis-operator install operator --image 127.0.0.1:32000/syndesis-operator --tag latest.
  • A bug report shows a containerd problem in the microk8s implementation; the file in question also refers to 'localhost' rather than '127.0.0.1' (upgrade to 1.14 seems to have broken my access to private registries canonical/microk8s#384 (comment)).
  • Modified the file, restarted microk8s, and the operator pod started successfully.

Note:
The built-in registry is NOT the same as the image cache available via microk8s.ctr images. So
just because an image was pushed to 127.0.0.1:32000 doesn't mean it will appear in the
image cache until it is actually used.
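
As a hedged illustration of the push step (the operator image name is an assumption; 32000 is the default microk8s registry port mentioned above):

# tag the locally built operator image for the microk8s registry
docker tag syndesis/syndesis-operator:latest 127.0.0.1:32000/syndesis-operator:latest
# push via the explicit IPv4 address; 'localhost' can resolve to ::1 and hang
docker push 127.0.0.1:32000/syndesis-operator:latest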

  4. The operator has 2 distinct switch points available for custom image/tag combinations:
    a) when building the operator we can change the default image/tag combination;
    b) when running the operator we can override the default image/tag combination.

@lburgazzoli
Collaborator

Gist for guidelines on converting DeploymentConfig to Deployment:
https://gist.github.com/bmaupin/d5be3ca882345ff92e8336698230dae0

Once syndesis migrates to camel-k this won't be needed any more, as camel-k takes care of generating the right "deployment" depending on the environment (i.e. it also takes knative services into account)

@phantomjinx
Contributor

Gist for guidelines on converting DeploymentConfig to Deployment:
https://gist.github.com/bmaupin/d5be3ca882345ff92e8336698230dae0

Once syndesis migrates to camel-k this won't be needed any more, as camel-k takes care of generating the right "deployment" depending on the environment (i.e. it also takes knative services into account)

Thanks @lburgazzoli. Yes, you're right, but we do need it at the moment for converting the other Syndesis DeploymentConfigs, eg. operator and syndesis-db. Converted the operator at the end of last week.

@phantomjinx
Contributor

Kubebox -> https://github.com/astefanutti/kubebox

  • A terminal & web console for kubernetes

Kubespy -> https://github.com/pulumi/kubespy

  • For observing kubernetes resources in real time

@phantomjinx
Contributor

A blog on the interesting problems encountered in kubernetes development.

@phantomjinx
Contributor

First experiment with an ingress (the kubernetes alternative to OS routes)

Enabling the dashboard and exposing it through an https ingress.

  • The dashboard is packaged as an addon in microk8s, so it needs to be enabled first.

  • The dashboard is configured with minimal privileges, so a service account needs to be created and bound to cluster-admin:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
  • Once added, the dashboard-admin token can be fetched in order to log in to the dashboard (the token is a long string):
secret=$(kubectl -n kube-system get secrets | grep dashboard-admin | awk '{print $1}')
kubectl -n kube-system describe secret/${secret} | grep "token:" | awk '{print $2}'
  • Cannot use the token yet as the URL of the dashboard has yet to be determined.
  • Several methods to log in:
  1. Port forward (see here)
  2. Use kubectl proxy in front (see here)
  3. Use an ingress - read on ...
  • Look at the dashboard service and find its IP, eg. 10.1.37.112. Then check the container port setting in the spec to confirm the exposed port, eg. 8443. Thus, it is possible with the likes of microk8s to access the IP directly in a browser and bring up the dashboard with https://<IP>:8443
  • This has its limitations, because a change to the service will change this IP and, more importantly, the IP is internal so not necessarily reachable.
  • The ingress can use paths and/or hosts to redirect to alternative services. However, struggled to get anything working with just paths so moved to using a host.
  • The host is a dns name that maps to the IP address of the endpoint specified in the service, eg. 127.0.0.1. So in this case, simply added kube.dash as an alias of localhost in /etc/hosts.
  • For TLS/SSL support, first create a secret containing the certificate details. This requires a couple of steps:
  1. Obtain a certificate for the dns name, eg. kube.dash. Either go to Let's Encrypt or create a self-signed certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout kube-self-signed.key -out kube-self-signed.crt -subj "/CN=kube.dash/O=kube.dash"
  2. Create the secret:
namespace=kube-system # where dashboard is installed
name=dashboard-secret # name of secret referred to in ingress
kubectl -n $namespace create secret tls $name --cert=kube-self-signed.crt --key=kube-self-signed.key
  • Now can create an ingress resource like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - kube.dash
    secretName: dashboard-secret
  rules:
  - host: kube.dash
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 8443
  • If all of this works correctly then navigating to https://kube.dash will display the dashboard.
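
A quick hedged check from the host (assuming the /etc/hosts alias above; -k because the certificate is self-signed):

curl -k https://kube.dash/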

Supplemental

  • Changing the port of the ingress controller from 443 to 8443: modify the daemonset/nginx-ingress-controller as below; once updated, a new ingress controller will be initialised. All ingress resources will then be accessed using https://<host>:8443.
...
spec:
  containers:
    - name: nginx-ingress-microk8s
      image: >-
        quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.25.1
      args:
        - /nginx-ingress-controller
        - '--configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf'
        - '--publish-status-address=127.0.0.1'
        - '--http-port=8080' # Add this argument to change http
        - '--https-port=8443' # Add this argument to change https
      ports: # Modify the ports that should be exposed
        - hostPort: 8080
          containerPort: 8080
          protocol: TCP
        - hostPort: 8443
          containerPort: 8443
          protocol: TCP
...

Not required but FYI

  • To enable ssl-passthrough on the nginx-ingress-controller, update the arguments in its daemonset by adding --enable-ssl-passthrough.

@phantomjinx
Contributor

Openshift auto-generates a self-signed key/certificate combo when the service is given the following annotation:

annotations:
  service.alpha.openshift.io/serving-cert-secret-name: <name-of-secret-to-be-created>

This is responsible for the syndesis-oauthproxy-tls secret that is mounted by the syndesis-oauth-proxy.
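
On plain kubernetes nothing generates this secret, so an equivalent has to be created by hand. A hedged sketch, assuming the certificate and key files already exist and a syndesis namespace:

kubectl -n syndesis create secret tls syndesis-oauthproxy-tls \
  --cert=tls.crt --key=tls.key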

@phantomjinx
Contributor

microk8s basic auth csv format:

password,user,uid,"group1,group2,group3"
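
For example (all values here are illustrative only):

s3cretpa55,developer,developer-1,"system:masters"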

@phantomjinx
Contributor

phantomjinx commented Mar 14, 2020

Since kubernetes distributions don't tend to come with an authentication/authorization identity-provider, it is necessary to install one and then tie into it via OpenID Connect using oauth2_proxy. The latter is to be used instead of the openshift oauth-proxy, since the oauth-proxy is designed to work only with openshift.

Useful references for setting up keycloak as a provider:

An alternative to keycloak is dex, which can act as a shim to google or github.

Using keycloak in oauth2_proxy
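
A hedged sketch of tying oauth2_proxy into keycloak via its generic OIDC support (the realm URL, client id, secret and upstream are placeholders, and exact flags can vary between oauth2_proxy versions):

oauth2_proxy \
  --provider=oidc \
  --oidc-issuer-url=https://keycloak.example.com/auth/realms/syndesis \
  --client-id=syndesis \
  --client-secret=<client-secret> \
  --cookie-secret=<random-string> \
  --email-domain=* \
  --upstream=http://syndesis-server:8080/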

@phantomjinx
Contributor

First time syndesis executed on a kubernetes implementation!
[screenshot: syndesis-on-kubernetes]

  • No requirement for a separate keycloak installation. To use oauth2_proxy it is enough to break out to github and authenticate; the oauth_proxy cookie is then enough for the api server to produce output for REST requests.
  • The oauth2_proxy, by default, produces an oauth2_proxy cookie while syndesis expects 'oauth_proxy'. However, the former has a handy --cookie-name switch which solves the problem.
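
For instance, a minimal hedged github-backed invocation that also renames the cookie (the client credentials and upstream are placeholders):

oauth2_proxy \
  --provider=github \
  --client-id=<github-oauth-app-id> \
  --client-secret=<github-oauth-app-secret> \
  --cookie-secret=<random-string> \
  --cookie-name=oauth_proxy \
  --email-domain=* \
  --upstream=http://syndesis-server:8080/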

@phantomjinx
Contributor

Summary of major issues to be addressed:

  1. The {{.Syndesis.RouteHostname}} is blank.
  • Cannot install/use ingresses on minishift since it is only 3.11 and does not support them.

  2. Need to generate certificates for the oauth2_proxy in order to use TLS. Openshift does this automatically but of course on kubernetes this is not the case.

  3. The image for oauth2_proxy is on quay.io, hence the need to modify build/conf/config.yaml. This needs further work to add in coordinates for specifying the auth provider, client-id & secret.

  4. Modify the arguments of oauth2_proxy, since they need to be broader than those of the openshift version of oauth_proxy.

  5. Update the route to be an ingress, although the difficulty is ensuring this will be backward-compatible.

  6. Small changes in code required, including:

  • route.Spec.host -> ingress.Spec.rules[0].Host
  • DeploymentConfig -> Deployment (especially when calling wait execution code expecting the former)
  • A definitive Platform attribute in the configuration to act as an if condition (see the sketch after this list)
  • Changes in RBAC rules to allow for ingresses
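
As a hedged sketch of the kind of switch point meant by the Platform attribute (names here are illustrative, not the actual Syndesis types):

package config

// ApiServer is a hypothetical configuration fragment: a single flag that
// install code can branch on instead of probing for openshift APIs.
type ApiServer struct {
	Openshift bool   // true when the target cluster is openshift
	Version   string // kubernetes version reported by the api server
}

// exposureKind picks the resource used to expose syndesis externally.
func exposureKind(api ApiServer) string {
	if api.Openshift {
		return "Route" // openshift can generate the route hostname itself
	}
	return "Ingress" // plain kubernetes needs an ingress and an explicit host
}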

Conclusion

  • Careful evaluation of how to support kubernetes without damaging the downstream working model or increasing complexity.
  • Avoid needless duplication of code, balanced against trying to support the broadest range of platforms.

@rplescia

I'm super excited about this development stream to port Syndesis to plain kubernetes.

@medkbadri

@phantomjinx good job
Is it possible to share a repo containing the modifications that you performed?
Thanks

@Ettery

Ettery commented May 29, 2020

I'd love to support this but we are committed to vanilla Kubernetes on-prem and AWS as a cloud provider.

phantomjinx added a commit to phantomjinx/syndesis that referenced this issue Jun 13, 2020
* syndesis_types.go
 * Adds oauth secret properties to be specified in the CR. Used by k8
   for the auth provider credentials & tls comms certificate

* 04-syndesis-oauth-proxy...
 * Splits proxy template into OS & k8 versions
 * k8 version has image hard-coded since oauth2_proxy is required
 * k8 version has far broader config as it allows different providers
 * OS version generates the syndesis-oauthproxy-tls whereas the k8
   version cannot & requires this to be manually specified

* role.yml.tmpl
 * Adds ingress permissions

* ingress.yml.tmpl
 * Use ingress for k8 but retain route for OS since latter has ability to
   generate the route hostname

* action/install.go
* conduit.go
 * Uses new interface Conduit to wrap around ingress & route so install
   can interrogate them interchangeably.

* configuration.go
 * Moves Openshift flag to an ApiServer struct & track the version of k8
 * Adds non-OS checks on the RouteHostname & auth credentials/certificate
 * Adds routeHostname to SetRoute rather than asking to fetch it again
   since all instances of its use, the value is already known

* Only call checks on route host name & credentials when actual install

* Refactors syndesis tooling scripts for detecting platform and running
  the most appropriate functions

* Extra commands to supplement kubectl to make changing context easier

* README file for install instructions
@phantomjinx
Contributor

PR for review -> #8697

phantomjinx added a commit to phantomjinx/syndesis that referenced this issue Jun 15, 2020 (same commit message as above)
phantomjinx added a commit to phantomjinx/syndesis that referenced this issue Jun 18, 2020 (same commit message as above)
@SvenC56

SvenC56 commented Jul 2, 2020

If installation on Kubernetes becomes possible, will there also be a helm chart?

@phantomjinx
Contributor

@SvenC56
Up until this point I've never used helm, but I can certainly consider it.

@mingfang

Please provide plain old docker images and Kubernetes yaml files.
No helm, operator, or openshift-specific stuff.
