If you want to give people the opportunity to deploy your project instantly on Scalingo, you can set up a deploy button on your GitHub project or even on your website. A scalingo.json file is required at the root of your GitHub project in order to generate the deployment page.
With the rise of CI tools like Jenkins/GitLab and config management tools like Salt/Ansible, continuous integration has become very flexible. Most projects now use Git for version control and CI tools like Jenkins to build and test the packages automatically whenever a change is pushed to the repo. Once the build is successful, the packages are pushed to a repository so that config management systems like Salt/Puppet/Ansible can go ahead and perform the upgrade. In my previous blogs, I've explained how to build a Debian package and how to create and manage APT repos via aptly.
In this blog I'll explain how to automate these two processes. The flow is like this: we have a GitHub repo, and once a change is pushed to it, GitHub sends a hook to our Jenkins server, which in turn triggers the Jenkins package build. Once the package has been successfully built, Jenkins automatically adds the new packages to our repo and publishes them to our APT repo via aptly.

Installing Jenkins

First, let's set up a Jenkins build server.

$ wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
$ echo 'deb http://pkg.jenkins-ci.org/debian binary/' >> /etc/apt/sources.list.d/jenkins.list
$ apt-get update && apt-get install jenkins
$ /etc/init.d/jenkins restart

Once the Jenkins service is started, we can access the Jenkins UI via "http://<jenkins-server>:8080".
By default there is no authentication for this URL, so accessing it will open up the Jenkins UI.

Creating a Build Job in Jenkins

In order to use a Git repo, we have to install the Git plugin first. In the Jenkins UI, go to "Manage Jenkins" – "Manage Plugins" – "Available", search for "Git plugin" and install it. Once the Git plugin has been installed, we can create a new build job.
Click on "New Item" on the home page, select "Freestyle Project" and click "OK". On the next page, we need to configure all the necessary steps for the build job. Fill in the necessary details like project name, description etc. Under "Source Code Management", select Git and enter the repo URL. Make sure that the jenkins user has access to the repo. We could also use deploy keys, but I've generated a separate SSH key for the Jenkins user and added it to GitHub. Under "Build Triggers", select "Build when a change is pushed to GitHub" so that Jenkins starts the build job every time a change is pushed to the repo.
Under the Build section, click on "Add build step", select "Execute shell" and add our package build script, which is stage 1.

set -e
set -x
debuild -us -uc

In stage 2, I'm going to publish my newly built packages to my APT repo.

aptly repo add myapt ./openvpn*.deb
/usr/bin/env script -qfc 'aptly publish -passphrase=<passphrase> update myapt'

If you look at the above command, I've used the script command. This is because I was getting the error "aptly stderr: gpg: cannot open tty `/dev/tty': No such device or address" whenever I tried to update a repo via aptly from Jenkins. This is due to a bug in aptly.
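To see why the script wrapper helps, here is a small standalone sketch (a plain echo stands in for the aptly call): script, from util-linux, runs the wrapped command under a pseudo-terminal, so tools that insist on opening /dev/tty (like gpg during aptly's repo signing) can run from a non-interactive Jenkins build step.

```shell
#!/bin/sh
# `script` (util-linux) runs the given command under a pseudo-terminal.
# -q: quiet (no start/done header), -f: flush output immediately,
# -c: the command to run. The typescript log goes to /dev/null since
# we only care about the pseudo-tty, not the session recording.
out=$(script -qfc 'echo tty-ok' /dev/null)
echo "$out"
```

In the Jenkins job, the same pattern simply wraps the aptly publish command instead of echo.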
The fix has been placed on the master branch but it's not yet released; the script command is a temporary workaround for this bug. Now we have a build job ready. We can manually trigger a build to test that the job works. If the build is successful, we are done with our build server. The final step is configuring GitHub to send a trigger whenever a change is pushed.

Configuring GitHub Triggers

Go to the GitHub repo and click on the repo settings.
Open "Webhooks and Services", select "Add Service" and choose "GitHub plugin". It will ask for the Jenkins hook URL, which is "http://<jenkins-server>/github-webhook/"; add the service. Once the service is set, we can click on "Test service" to check if the webhook is working fine. Once the test hook is created, go to the Jenkins job page and select "GitHub Hook Log".
The test hook should be displayed there; if not, something is wrong with the config. Now we have fully automated build and release management. Config management tools like Salt/Ansible etc. can go ahead and start the deployment process.
In my previous blog, I've explained how to build a Debian package from source. In this blog I'm going to explain how to create and manage our own APT repository. Enter aptly, a Swiss army knife for Debian repository management: it allows us to mirror remote repositories, manage local package repositories, take snapshots, pull new versions of packages along with dependencies, and publish as a Debian repository. Aptly can upload the repo to Amazon S3, but clients need the APT S3 transport installed in order to use it from S3. First, let's install aptly on our build server.
More detailed installation documentation is available in the aptly docs.

$ echo 'deb http://repo.aptly.info/ squeeze main' >> /etc/apt/sources.list
$ gpg --keyserver keys.gnupg.net --recv-keys 2A194991
$ gpg -a --export 2A194991 | sudo apt-key add -
$ apt-get update && apt-get install aptly

Let's create a repo,

$ aptly repo create -distribution=wheezy -component=main my-repo # where my-repo is the name of the repository

Once the repo is created, we can start adding our newly created packages to it.

$ aptly repo add <repo-name> <package> # in my case: aptly repo add my-repo openvpn_2.3.6_amd64.deb

The above command adds the new package to the repo. Now, in order to make this repo usable, we need to publish it.
A valid GPG key is required for publishing the repo, so let's create a GPG key for aptly.

$ gpg --gen-key
$ gpg --export --armor <keyname> > myrepo-pubkey.asc # creates a pubkey that can be distributed
$ gpg --send-key KEYNAME # use this if we want to push the key to a public keyserver; we can also pass --keyserver if we want to specify a particular keyserver

Once we have our GPG key, we can publish our repo. By default aptly can publish the repo to S3, or it can publish locally and we can use any webserver to serve the repo.

$ aptly publish repo -distribution='wheezy' my-repo

Once published, we can point the webserver at "~/.aptly/public", where the repo files are created. Aptly also comes with an embedded webserver, which can be invoked by running aptly serve. Aptly really makes repo management easy.
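Once the repo is being served over HTTP, consuming it from a client is just a matter of adding an APT source and importing the pubkey we exported above. A sketch of the sources.list entry, assuming a placeholder hostname of repo.example.com and the wheezy/main distribution/component used above:

```
deb http://repo.example.com/ wheezy main
```

The client also needs the exported myrepo-pubkey.asc imported via apt-key so that APT trusts the repo's signatures.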
We can actually integrate this into our Jenkins job so that each time we build a package, we can directly add and upload it to our repository. Installing applications via packages saves us a lot of time. Especially for an ops-oriented guy, compiling applications from source is sometimes painful and time consuming, especially the dependencies. Later, with the rise of config management systems, people started creating automated scripts that install the necessary dependencies and run the usual make && make install.
But applications like FreeSWITCH were taking 15+ minutes to finish compilation, which is definitely a bad idea when you want to deploy a new patch on a cluster. In such cases packages are really a life saver: build the packages once as per our requirements and deploy them throughout the infrastructure. Now with tools like Jenkins, Travis CI etc. we can attain a good level of CI.
In this blog, I'm going to explain how to build a Debian package from scratch. First let's install the two main dependencies for a build machine:

$ apt-get install devscripts build-essential

For the past few days I was playing with OpenVPN and SoftEther. I'm going to build a simple Debian package for OpenVPN from source. The current stable version of OpenVPN is 2.3.6. First let's get the OpenVPN source code.
$ wget <openvpn-2.3.6 source tarball URL>
$ tar xvzf openvpn-2.3.6.tar.gz && cd openvpn-2.3.6

Now, for building a package, we first need to create a debian folder, in which we place all the files required for building the package.

$ mkdir debian

As per Debian packaging conventions, the mandatory files are changelog, control, copyright and rules. The changelog file's content should match the exact syntax, otherwise packaging will fail at the initial stage itself. There are some more optional files that we can add.

Ever since the entry of Docker, everyone is busy porting their applications to Docker containers. Now with tools like Mesos, CoreOS etc. we can easily achieve scalability too.
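Going back to the packaging steps above: since the changelog syntax is strict enough to fail the build, a reference entry helps. A minimal debian/changelog sketch (package version, maintainer name/email and date are illustrative placeholders; the maintainer line must start with a single space and use two spaces before the RFC 2822 date):

```
openvpn (2.3.6-1) unstable; urgency=low

  * Initial packaging of OpenVPN 2.3.6 from upstream source.

 -- Your Name <you@example.com>  Thu, 15 Jan 2015 10:00:00 +0000
```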
@Plivo we always dedicate ourselves to playing with such new technologies. In my previous blog posts, I've explained how to containerize FreeSWITCH and how to perform basic load tests using simple dialplans. My previous load tests required a bunch of basic FreeSWITCH servers to originate calls to flood the FreeSWITCH container. So this time I'm going to use a simpler method, which everyone can use even from their laptops. SIPp is a free open source test tool / traffic generator for the SIP protocol.
But the main issue for a beginner like me is generating a proper XML scenario for SIPp that matches my exact production scenarios. After googling, I came across SippyCup, a super simple Ruby wrapper over SIPp.
We just need to create a simple yml file; sippycup parses this yml file and generates the equivalent XML, which is then used to generate calls. Sippycup can also be used to generate only the XML file for SIPp. Setting up sippycup is very simple. There are only two dependencies: 1) Ruby (2.1.2 recommended) 2) SIPp. Another important dependency is our local internet bandwidth.
Flooding too many calls will definitely result in network bottlenecks, which I faced when I generated 1k calls from my laptop. Now let's install SIPp.
$ sudo apt-get install pcaputils libpcap-dev libncurses5-dev
$ wget <sipp tarball URL>
$ tar zxvf sipp.svn.tar.gz
# compile sipp
$ make
# compile sipp with pcap-play support
$ make pcapplay

Once we have installed SIPp and Ruby, we can install sippycup via Ruby gems.

$ gem install sippy_cup

Configuring sippycup

First we need to create a yml file for our call flow. There is good documentation available on the various options that can be used in the yml to suit our call flow. My call flow is pretty simple: I have a dialplan in my Docker FS which plays an mp3 file. Below is a simple yml config for this call flow (the addresses and counts were elided in the original, so placeholders are shown):

source: <local-ip>:<port>
destination: <fs-ip>:<port>
max_concurrent: <n>
calls_per_second: <n>
number_of_calls: <n>
to_user: <user> # should match the FS dialplan
steps: # call flow steps
  - invite # initial call INVITE
  - wait_for_answer # waits for answer, handles 100, 180/183 and finally 200 OK
  - ack_answer # ACK for the 200 OK
  - sleep 1000 # sleeps for 1000 seconds
  - send_bye # sends BYE signal to FS

Now let's run sippycup using our config yml:

$ sippy_cup -r test.yml

Below is the output of a sample load test.
Redis is an open-source, networked, in-memory, key-value data store. It's heavily used everywhere, from web stacks to monitoring to message queues. Monitoring tools like Sensu already have some good scripts to monitor Redis. Last month a new rate-limited queue based on Redis was open-sourced. So apart from just monitoring checks, we decided to keep a time-series DB of what's happening in our Redis cluster.
Since we heavily use the ELK stack to visualize our infrastructure, we decided to go ahead with the same.

CollectD Redis Plugin

There is a cool CollectD plugin for Redis. It pulls a variety of data from Redis, including memory used, commands processed, number of connected clients and slaves, number of blocked clients, number of keys stored per db, uptime and changes since last save. The installation is pretty simple and straightforward.
$ apt-get update && apt-get install collectd
$ git clone <redis-collectd-plugin repo> /tmp/redis-collectd-plugin

Now place the redis_info.py file in the collectd plugin folder and enable the Python plugin so that collectd can use this Python file. Below is our collectd conf:

Hostname '<hostname>'
Interval 10
Timeout 4
Include '/etc/collectd/filters.conf'
Include '/etc/collectd/thresholds.conf'
LoadPlugin network
ReportStats true
LogLevel info
Include '/etc/collectd/redis.conf' # This is the configuration for the Redis plugin

Now copy the Redis Python plugin and its conf file to the collectd folder.

$ mkdir /etc/collectd/plugin # This is where we are going to place our custom plugins
$ cp /tmp/redis-collectd-plugin/redis_info.py /etc/collectd/plugin/
$ cp /tmp/redis-collectd-plugin/redis.conf /etc/collectd/

By default, the plugin folder in redis.conf is defined as '/opt/collectd/lib/collectd/plugins/python'. Make sure to replace this with the location where we copied the plugin file, in our case '/etc/collectd/plugin'.
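For reference, a minimal /etc/collectd/redis.conf for the Python plugin might look like the following sketch. This assumes the plugin file is installed as redis_info.py (its upstream file name) under /etc/collectd/plugin, and that Redis listens locally on the default port; adjust Host/Port to your setup.

```
<LoadPlugin python>
  Globals true
</LoadPlugin>

<Plugin python>
  ModulePath "/etc/collectd/plugin"
  Import "redis_info"
  <Module redis_info>
    Host "localhost"
    Port 6379
    Verbose false
  </Module>
</Plugin>
```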
Now let's restart the collectd daemon to enable the Redis plugin.

$ /etc/init.d/collectd stop
$ /etc/init.d/collectd start

In my previous blog, I've mentioned how to enable and use the CollectD input plugin in Logstash and how to use Kibana to plot the data coming from collectd. Below are the data points we receive from CollectD on Logstash:

1) type_instance: blocked_clients
2) type_instance: evicted_keys
3) type_instance: connected_slaves
4) type_instance: commands_processed
5) type_instance: connected_clients
6) type_instance: used_memory
7) type_instance: <db>-keys
8) type_instance: changes_since_last_save
9) type_instance: uptime_in_seconds
10) type_instance: connections_received

Now we need to visualize these via Kibana. Let's create some Elasticsearch queries so that we can visualize them directly. Below are some sample queries created in the Kibana UI.

1) type_instance: 'commands_processed' AND host: '<hostname>'
2) type_instance: 'used_memory' AND host: '<hostname>'
3) type_instance: 'connections_received' AND host: '<hostname>'
4) type_instance: '<db>-keys' AND host: '<hostname>'

Now that we have some sample queries, let's visualize them. Create histograms in the same way by changing the selected queries.
kube-openvpn

Synopsis

Simple OpenVPN deployment using native Kubernetes semantics. There is no persistent storage; CA management (key storage, cert signing) needs to be done outside of the cluster for now.
I think this is better - unless you leave your keys on your dev laptop.

Motivation

The main motivator for this project was the ability to route service requests back to local apps (running on the VPN client), making life much easier in development environments where developers cannot run the entire app stack locally but need to iterate quickly on one app.

Usage

First, you need to initialize your PKI infrastructure. EasyRSA is bundled in this container, so this is fairly easy. Replace OVPN_SERVER_URL with your endpoint to-be.

$ docker run --user=$(id -u) -e OVPN_SERVER_URL=tcp://vpn.my.fqdn:1194 -v $PWD:/etc/openvpn:z -ti ptlange/openvpn ovpn_initpki

Follow the instructions on screen. Remember (or better: securely store) the secure password for the CA.
You are now left with a pki folder in your current working directory. Generate the initial Certificate Revocation List. This file needs to be updated every $EASYRSA_CRL_DAYS. All clients will be blocked when this file expires.