- I have worked a lot with GitLab lately and am quite convinced by its nice workflows and good integration of services
- therefore I want to implement the project in a GitOps manner
- the cloud service I actually wanted to use was Linode, but since we have a free AWS account via BTH and lots of companies out there use it widely, I will stick with AWS for this project
- for container orchestration I will use Kubernetes, as it is the most prominent cloud orchestration system right now
## GitOps
- central to the GitOps build is the usage of Git and GitLab (GitHub can be used as well) and the usage of merge requests for all changes made to the project
- the build will be split into two projects (infrastructure and application code), which are organized in one repository group
- the group can share resources produced by the different repositories
- the group can therefore link pipelines
- the group can also organize team members, roles and much more
- ideally the app project will produce a runnable container image which will then be used in the infrastructure project via the container registry
- the whole cloud infrastructure will be controlled via terraform
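The hand-off from app project to infrastructure project could look like the following `.gitlab-ci.yml` job sketch, which builds the container image and pushes it to the GitLab container registry. The stage name, image tags and Dockerfile location are assumptions, not fixed decisions; the `CI_REGISTRY_*` variables are predefined by GitLab.

```yaml
# Sketch: build the app image and publish it to the GitLab container registry
publish-image:
  stage: publish
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

The infrastructure project can then reference `$CI_REGISTRY_IMAGE:<sha>` in its deployment manifests.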
<!-- Put some info graphic in here visualizing the whole setup and also the pipeline -->
### The Infrastructure Project
This project will define two clusters: one test cluster and one production cluster. Both clusters will be organized on the AWS account. Hopefully there are no limitations that will break this plan; otherwise I have to acquire another test account with a different cloud service provider to declare the second cluster. These clusters represent my two environments: test and production.
On the test cluster, all tests and integration tests will be run.
The production cluster should only be deployed to on changes to the main branch or on manual upgrades.
Built into GitLab's cluster functionality are apps which you can run on top of your Kubernetes cluster. One of those apps is Prometheus, which I want to use as a monitoring tool for both clusters.
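Declaring the two environments on AWS could start out as a Terraform sketch like the one below, using the managed EKS service. Cluster names, the IAM role and the subnets are assumptions and would come from the actual account setup.

```hcl
# Sketch: one test and one production cluster on the same AWS account
provider "aws" {
  region = "eu-central-1"
}

resource "aws_eks_cluster" "test" {
  name     = "project-test"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

resource "aws_eks_cluster" "production" {
  name     = "project-production"
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}
```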
### The App Project
Obviously this project will hold all the application code for the front- and backend. Commits to the main branch can trigger deploys to the production cluster. 'Normal' commits will start the test suite in a local test environment which runs inside a GitLab runner. Inside the runner all services will run in a single container/runtime environment, since the GitLab runner has no special requirements other than NodeJS and MongoDB.
It should also be possible to run the builds on the test cluster, but I'm not sure yet what the best approach would be. In the app repository I want to develop the app, not the infrastructure. There could be side effects when I deploy it to the test cluster that cannot be reset properly. So it can be nice to have this option to test the integration, but for actual unit tests it's overkill.
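The 'normal' commit case described above can be sketched as a single CI job with MongoDB attached as a GitLab CI service; the image tags, database name and npm scripts are assumptions.

```yaml
# Sketch: run the test suite inside one runner, with MongoDB as a CI service
test:
  stage: test
  image: node:18
  services:
    - mongo:6          # reachable from the job under the hostname "mongo"
  variables:
    MONGO_URL: "mongodb://mongo:27017/testdb"
  script:
    - npm ci
    - npm test
```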
## Workflow
### App Repo
Branch Structure:
```mermaid
graph LR
F[Feature]-->D[Dev]
D-->M[Main]
```
Stages:
```mermaid
graph LR
B[Build]-->T[Test]
T-->DP[Deploy/Publish]
```
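The three app-pipeline stages from the diagram map to a `.gitlab-ci.yml` skeleton like this one; the job scripts are placeholders.

```yaml
# Sketch: app-repo stage skeleton matching the diagram
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "compile / bundle the app here"

test:
  stage: test
  script:
    - echo "run the test suite here"

deploy:
  stage: deploy
  script:
    - echo "publish the image / deploy here"
```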
### Infra Repo
Branch Structure:
```mermaid
graph LR
F[Feature]-->D[Dev]
D-->M[Main]
```
Stages:
```mermaid
graph LR
V[Validate]-->P[Plan]
P-->DP[Deploy]
```
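The infra stages from the diagram can be expressed with the plain `terraform` CLI in a pipeline sketch like this. The image tag, backend configuration and the `main`-only deploy rule are assumptions; the empty `entrypoint` is needed because the `hashicorp/terraform` image sets `terraform` as its entrypoint, which conflicts with GitLab's shell-based job execution.

```yaml
# Sketch: validate / plan / deploy stages for the infra repo
stages: [validate, plan, deploy]

image:
  name: hashicorp/terraform:1.5
  entrypoint: [""]

validate:
  stage: validate
  script:
    - terraform init -backend=false
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths: [plan.tfplan]

deploy:
  stage: deploy
  script:
    - terraform init
    - terraform apply -auto-approve plan.tfplan
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```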
```mermaid
graph LR
subgraph infra[Infra Repo]
T[Terraform]-->A("Configuration Management Ansible")
A-->C["Kubernetes Containers"]
T-->|provisions|C
C-->F(Nginx: 81)
C-->B(NodeJS: 4001)
C-->DB[(MongoDB)]
end
style infra color:white,fill:#ff9,stroke:yellow
style B color:black,fill:#f91,stroke:#313,stroke-width:4px
style F color:white,fill:#191,stroke:#313,stroke-width:4px
```
### Linode Cloud Platform
- starting with the cloud provider, since my provisioning tooling has to support it (otherwise I would end up writing providers myself)
- Linode seems to have a really cheap solution with a 100€ starter bonus
- furthermore it seems to be a battle-tested system with better (personal) sympathy than AWS or GCP; alternatively AWS, since I have a student account already set up
- very good documentation
- really good terraform provider
- server locations in europe/germany
### Terraform
- provisioning will happen via *terraform*
- it's cloud agnostic, so no vendor lock-in etc.
- terraform is also suitable for setting up a load balancer
- if there is a beneficial setup, I will use **ansible** for *Configuration Management*
### Kubernetes
- terraform sets up Kubernetes
- Linode includes the Linode Kubernetes Engine for managing Kubernetes
- the first node is the primary node and manages the load
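If Linode is chosen, the managed cluster can be declared directly via the Terraform Linode provider; the label, version, region and node type below are assumptions.

```hcl
# Sketch: a managed LKE cluster declared in Terraform
resource "linode_lke_cluster" "test" {
  label       = "project-test"
  k8s_version = "1.27"
  region      = "eu-central"   # Frankfurt

  pool {
    type  = "g6-standard-1"
    count = 3
  }
}
```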
### VCS
- the software is managed in a GitLab repo
- CI will be triggered by commits to master or by tags
### CI / CD
- ~~I want to test CircleCI~~ I will stick with what I know best for CI/CD: **GitLab**
- hosted by Beuth Hochschule
- the CI will also have 3 stages: test / build / deploy
- after successful building and testing it will trigger the deployment process
- the deployment will only start on expected git tags or if the branch equals "master"
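Restricting the deploy job to master or tags can be written with GitLab's `rules` keyword; the script and the tag pattern are assumptions.

```yaml
# Sketch: deploy only on master or on version tags
deploy:
  stage: deploy
  script:
    - ./deploy.sh   # placeholder for the actual deployment command
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
    - if: '$CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/'
```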
### Nginx (frontend)
- listening on port 80
- CD (via Ansible) has to move the dist folder of the React app into the public HTML directory
- nginx setup has to point to that certain folder and serve it
- nginx has to catch requests which are neither directed to client nor backend-services
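The three nginx requirements above could be covered by a server block like this sketch; the document root and the `/api/` prefix are assumptions.

```nginx
# Sketch: serve the React build, proxy the API, catch everything else
server {
    listen 80;

    root /var/www/html;    # CD places the React dist folder here
    index index.html;

    location /api/ {
        # pass backend requests through to the NodeJS service
        proxy_pass http://localhost:4001;
    }

    location / {
        # serve the SPA; unknown routes fall back to index.html
        try_files $uri $uri/ /index.html;
    }
}
```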
### NodeJS (backend)
- listening on its own port, 4001
- reached via HTTP calls from the app (the address is defined in the "BASE_URL" env variable)
- the reverse proxy will pass requests through to the API URL
- does not include the MongoDB
### Testing
- testing the single parts (front-/backend) of the application on the CI server
- integration tests with a local setup via Makefile?
- integration tests in a separate cloud environment "staging"??
- TBD
### MongoDB Persistent Volume
- there will be a Persistent Volume (PV) that other pods can claim storage from
- Persistent Volume Claims (PVCs) will declare their storage need, in our case simply 1 gigabyte
- the setup will consist of one primary pod running MongoDB which can replicate data to secondary volumes
- I may utilize **portworx** as a block storage layer organizer
- W.I.P.
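The 1-gigabyte claim described above translates into a Kubernetes PVC manifest like this sketch; the claim name is an assumption and the storage class is left to the cluster default.

```yaml
# Sketch: the storage claim a MongoDB pod would make
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```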
### Monitoring
- I like the look and feel of Prometheus / Grafana
- does Kubernetes ship with monitoring by itself? Install other tools via Helm?
- TBD
```mermaid
graph LR
R[Request] -->|www|C[Firewall/Loadbalancer]
C-->RP
RP-->F
RP-->B
B-->|http|DB1
subgraph subID[replicable VPS]
RP{Nginx: Reverse Proxy :80 }
F(Serve React Frontend)
B(NodeJS: Backend : 4001)
end
subgraph PortWorx Persistent Volumes*
DB1[(MongoDB Primary)]-->DB2
DB2[(MongoDB Secondary/Replica)]
end
style subID color:white,fill:#ff9,stroke:yellow
style B color:black,fill:#f91,stroke:#313,stroke-width:4px
style F color:white,fill:#191,stroke:#313,stroke-width:4px
```
## Questions and Improvements:
- The (static files of the) React app are going to be served directly via the reverse proxy. Or is it better to spin up another nginx instance for that?
- Is it legitimate to use cloud-provider-specific Kubernetes utilities (like the Linode Kubernetes Engine)?
- What exactly does this requirement mean?
> when deploying a new version, the app must continue to be reachable; this depends on the deployment strategy that has been chosen
- service resource (nginx ingress), no separation of frontend and backend necessary