
A Quickstart Guide to Deploying Qinling in Production

by Gaëtan Trellu in Developers Corner, posted on July 11, 2019

Qinling is an OpenStack project that provides Function-as-a-Service. It aims to offer a platform for running serverless functions, similar to AWS Lambda. Thanks to its plugin mechanism, Qinling natively supports different container orchestration platforms (Kubernetes, Docker Swarm, etc.) and different function package storage backends (local, Swift, S3).

Basically, it lets you trigger a function only when you need it, so you consume only the CPU and memory time you actually use, without having to configure any servers. In the end, this makes for a lighter bill, which makes everyone happy. (There's a lot more about Qinling online if you want to take a deeper dive.)
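
To make this concrete, here is a minimal sketch of the workflow using the Qinling CLI (python-qinlingclient). It assumes a Python 3 runtime already exists; the function name and input are made up for illustration, so double-check the flags against your client version:

# Write a tiny function; Qinling invokes the configured entry point
$ cat > hello.py <<'EOF'
def main(name='World', **kwargs):
    return 'Hello, %s!' % name
EOF
# Register the code as a function against an existing runtime
$ openstack function create --name hello --runtime python3 \
    --file hello.py --entry hello.main
# Invoke it on demand; you only consume resources during the execution
$ openstack function execution create hello --input '{"name": "Qinling"}'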

Deploying Qinling in production

Our platforms are deployed and maintained with Kolla, an OpenStack project that deploys OpenStack services inside Docker containers and configures them with Ansible. The first thing I checked was whether Qinling was integrated with Kolla; alas, it was not.

When you manage production systems, you don't want to deal with custom setups that are impossible to maintain or upgrade (that little voice in your head knows what I mean), so I started working on integrating Qinling into Kolla, namely the Docker and Ansible parts.

The qinling_api and qinling_engine containers are now up and running, configured to communicate with RabbitMQ, MySQL/Galera, memcached, Keystone and etcd. The final important step is to authenticate qinling-engine against the Kubernetes cluster. I must admit this was the most complex part to set up, and the documentation is a bit confusing.
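
For reference, once the integration lands, enabling the service should come down to the usual Kolla-Ansible switch. This is a sketch that assumes the Qinling role follows the same enable_* convention as the other services:

# /etc/kolla/globals.yml
enable_qinling: "yes"

# Deploy (or reconfigure) only the Qinling containers
$ kolla-ansible -i multinode deploy --tags qinling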

Qinling and Magnum, for the win!

Our Kubernetes cluster has been provisioned by OpenStack Magnum, an OpenStack project used to deploy container orchestration engines (COEs) such as Docker Swarm, Mesos and Kubernetes.

Basically, communication between Qinling and Kubernetes is secured by SSL certificates (the same ones used with kubectl): qinling-engine needs to know the CA, the certificate, the key, and the Kubernetes API endpoint.

Magnum provides a CLI that makes it easy to retrieve the certificates; just make sure that you have python-magnumclient installed.

# Get Magnum cluster UUID
$ openstack coe cluster list -f value -c uuid -c name
687f7476-5604-4b44-8b09-b7a4f3fdbd64 goldyfruit-k8s-qinling
# Retrieve Kubernetes certificates
$ mkdir -p ~/k8s_configs/goldyfruit-k8s-qinling
$ cd ~/k8s_configs/goldyfruit-k8s-qinling
$ openstack coe cluster config --dir . 687f7476-5604-4b44-8b09-b7a4f3fdbd64 --output-certs
# Get the Kubernetes API address
$ grep server config | awk -F"server:" '{ print $2 }'

Four files should have been generated in the ~/k8s_configs/goldyfruit-k8s-qinling directory:

ca.pem: the CA, used for the ssl_ca_cert Qinling option
cert.pem: the certificate, used for the cert_file Qinling option
key.pem: the key, used for the key_file Qinling option
config: the Kubernetes configuration

Only ca.pem, cert.pem and key.pem will be useful in our case (the config file is only used to get the Kubernetes API address). Per the Qinling documentation, they map to these options:

[kubernetes]
kube_host = https://192.168.1.168:6443
ssl_ca_cert = /etc/qinling/pki/kubernetes/ca.crt
cert_file = /etc/qinling/pki/kubernetes/qinling.crt
key_file = /etc/qinling/pki/kubernetes/qinling.key
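
Magnum generates .pem files while the configuration above points at .crt/.key paths, so the certificates need to be copied into place under the names qinling-engine expects (adjust the paths to your own layout; with Kolla they end up in the qinling_engine container's configuration):

$ sudo mkdir -p /etc/qinling/pki/kubernetes
$ sudo cp ca.pem /etc/qinling/pki/kubernetes/ca.crt
$ sudo cp cert.pem /etc/qinling/pki/kubernetes/qinling.crt
$ sudo cp key.pem /etc/qinling/pki/kubernetes/qinling.key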

At this point, once qinling-engine has restarted, you should see a network policy created on the Kubernetes cluster under the qinling namespace (yes, you should see that namespace too).
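
A quick way to verify both is with kubectl:

$ kubectl get namespace qinling
$ kubectl get netpol -n qinling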

The network policy mentioned above can block incoming traffic to the pods inside the qinling namespace, which results in timeouts from qinling-engine. A bug has been opened about this issue and it should be fixed soon; for now, the “best” thing to do is to remove the policy (keep in mind that it will be re-created every time qinling-engine restarts).

$ kubectl delete netpol allow-qinling-engine-only -n qinling

Just a quick word about the network policy created by Qinling: its purpose is to restrict pod access to a list of trusted CIDRs (192.168.1.0/24, 10.0.0.53/32, etc.), preventing connections from unknown sources.

One common issue is forgetting to open the Qinling API port (7070), which prevents the Kubernetes cluster from downloading the function code/package (it’s time to be nice to your dear network friend ^^).
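
A simple sanity check is to curl the Qinling API from inside the cluster; the API host below is a placeholder, so substitute the address of your own deployment:

# Spawn a throwaway pod and hit the Qinling API port
$ kubectl run curl-test -n qinling --rm -it --restart=Never \
    --image=curlimages/curl -- curl -sS http://<qinling-api-host>:7070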

Runtime, it’s time to run!

One of Qinling’s pitfalls is its “lack” of runtimes, which keeps it from being widely adopted. The reason there are not that many is security (completely understandable).

In a production environment (especially in a public cloud), it is recommended that cloud providers supply their own runtime implementations for security reasons: knowing how a runtime is implemented gives a malicious user a chance to attack the cloud environment.

So far, “only” Python 2.7, Python 3 and Node.js runtimes are available. It’s a good start, but it would be nice to have Golang and PHP too (just saying, not asking).
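
For reference, registering a runtime is an operator-side step done through the CLI; this is a sketch with a placeholder image, since the exact runtime image to use depends on what you trust enough to deploy:

# Register a runtime from a container image, then list the result
$ openstack runtime create --name python3 <python3-runtime-image>
$ openstack runtime list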

Conclusion

My journey has just begun and I think Qinling has huge potential, which is why I was a bit surprised to see the project is not as popular as it could be.

Having it in Kolla, improving the documentation for integration with Magnum, MicroK8s, etc., and providing more runtimes would help the project gain the popularity it deserves.

Thanks to Lingxian Kong and the community for making this project happen!

This post has appeared on Superuser and Medium.

Gaëtan Trellu

Gaëtan manages the TechOps team to implement and stabilize new features on the Ormuco Infrastructure-as-a-Service platform. Gaëtan has previously worked as a DevOps Engineer and as the CloudOps Lead at Ormuco. He is self-taught in IT, with particular expertise in the Unix and Linux ecosystems.
