Monday, July 24, 2017

Puppet – my take on it

Puppet is a technology that is mentioned in many venues: conferences, webinars, open source meetups, etc. I never had time to look at it before, but I recently came across the Puppet Learning VM and, considering I have some time on my hands, dived right into it.

Let me start with the conclusion: it's an amazing technology that is a bit too complicated for the average admin.

The Puppet Learning VM covers most of the concepts and how those concepts relate to each other. Everything is built in the form of configuration files. You start with one simple application and move towards its customization, extension and orchestration.

To grasp the concepts, I also signed up for the Self-Paced (free) training modules that Puppet published on their website. The only annoying thing: you get multiple emails when you register for a learning unit. An average unit is about 10 minutes of information and there are about a dozen of them, so you get 30+ emails while going through the videos. I find that a bit excessive. (Maybe that's just me.)

To summarize:

  • Building blocks (called Resources) include Files, Packages and Services
  • The blocks are organized into Classes so they can be managed together (a minimal sketch follows this list)
  • Facter is the service that collects the current state of the end system in full detail (not necessarily changeable by Puppet, e.g. OS version, MAC address, etc.)
  • Most Modules, Services and Manifests are not part of Puppet itself; they are written by the community and published in a repository called the Forge
  • Additionally, Roles and Profiles are used to further categorize Resources and end-user systems, to make a Puppet implementation scalable
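
As an illustration of the first two points, here is a minimal sketch of a class that ties a package, a file and a service together. The class name, paths and resource names are my own for illustration, not from the Learning VM:

class ntp_example {
  # Install the package first...
  package { 'ntp':
    ensure => installed,
  }
  # ...then manage its configuration file...
  file { '/etc/ntp.conf':
    ensure  => file,
    source  => 'puppet:///modules/ntp_example/ntp.conf',
    require => Package['ntp'],
  }
  # ...and restart the service whenever the file changes.
  service { 'ntpd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}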


Most of the Learning VM modules are well written. I had a bit of a challenge with 'quest', a program that is supposed to monitor a student's progress (I'm guessing it also prepares your learning environment for specific exercises). Sometimes quest didn't recognize completed tasks or got confused by sequential tasks, e.g. Task 3 changes a file that was already changed in Task 1, marking Task 3 as completed and Task 1 as not completed. Also, it seems that the "Defined Resource Types" quest doesn't really have its steps outlined; it is more of a single monolithic piece of text.

One last thing about the Puppet Learning VM: the troubleshooting chapter is very helpful and allowed me to resolve most of the issues I faced.

To summarize: if you need to manage and standardize your environment, Puppet is definitely a tool worth looking at. I must disagree with the statement from the Packet Pushers episode "Datanauts 093: Erasure Coding And Distributed Storage", where Puppet is mentioned as not being an advanced solution. Puppet can do a lot and deserves to be in demand.


Tuesday, July 18, 2017

Ways to automatically deploy Multi-Machine Blueprint in AWS

This week I was working on a small engagement automating an AWS deployment. Instead of covering the details of this specific project, I'd like to discuss the ways a deployment can be done in AWS, and I hope some readers can come up with more suggestions.

First of all it’s really unbelievable on how everything is documented and examples are provided with every statement.

Additionally, there are things that on most platforms you have to program and tweak yourself; in AWS the functionality is provided out of the box. For example, when your code needs to wait for a deployment, you would usually create a loop that keeps checking (kind of pinging) the service until it responds. In AWS it is a built-in function: you can "pause" your script until the component is deployed (or destroyed, etc.) using a single command. No loops, no "pinging".
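
A minimal sketch of that built-in wait using the AWS CLI (the instance ID below is a placeholder of mine, not from the project):

@echo off
rem Block until AWS reports the instance as running; no polling loop needed.
rem The instance ID is a placeholder for illustration only.
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
echo Instance is running, continuing with the rest of the deployment...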

So let’s look what tools can be used to deploy Multi-Machine Blueprint (MMB can be VMware term). There is AWS Command-line interface – it comes in Windows and Linux flavors. The commands and parameters are identical that makes it easy to migrate script from batch file (.bat) on windows to Linux shell script. As you probably know both of those are simple set of commands that executed sequentially.

Numerous parameters can make a command pretty heavy to read; in AWS there is always a way to define those parameters (such as Security Groups, VPCs, ELBs, etc.) in the form of a JSON file.


--generate-cli-skeleton creates an empty JSON template that has all the required parameter fields in it.
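
A quick sketch of that round trip (the service call and file name are my own illustration, not from the original project):

@echo off
rem Dump an empty parameter skeleton for run-instances into a file,
rem then, after filling it in, feed it back with --cli-input-json.
aws ec2 run-instances --generate-cli-skeleton > run-instances.json
rem ... edit run-instances.json: AMI ID, instance type, etc. ...
aws ec2 run-instances --cli-input-json file://run-instances.json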



What I was doing to streamline the process was using the AWS UI to create an object, then exporting the object's details into JSON using the describe-<object> command, and then moving the necessary parameters into my own template to create a new object with this template.
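
A hedged sketch of that describe-<object> step (the security group ID is a placeholder of mine):

@echo off
rem Capture an existing object's settings as JSON to seed your own template.
rem The group ID stands in for one created earlier in the AWS console.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 > my-sg.json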

The only caveat I found is extracting a specific parameter from the JSON output so it can be used in a different command. I'm using a Windows machine (my familiarity with the Linux command line is about on par with my Windows skills; modest me). So I had to experiment with "for /f" and "set" parameters (old MS-DOS).

Below is a simple example of how you can extract the Load Balancer DNS name from the JSON generated with aws elb describe-load-balancers --load-balancer-names <load balancer name>. Make sure you specify only one load balancer; if your file has data for multiple Load Balancers, the variable will be set to the last one found in the file.

To run it, you have to put it in a .bat file; it will not run as a plain command-line command (the %%a variable syntax only works inside batch files).



@echo off
rem Split each line of blog.json on quotes ("): token 2 is the key and
rem token 4 is the value; the carets escape the options string so the
rem quote character itself can be used as the delimiter.
for /f usebackq^ tokens^=2^,4^ delims^=^" %%a in ("blog.json") do ( if /i "%%a"=="DNSName" set "myelbhost=%%b" )
echo %myelbhost%


While writing this article I just discovered (thank you to this article) that the --query parameter can be used as an a-la-Unix "pipe" (aka "|"); however, you still have to create a batch file, as no direct command will be accepted. More details on --query can be found here. Below is the code that needs to be put into a batch file. Please note that the quotes, double quotes and some other formatting of the query parameters are slightly different; I believe these are Windows specifics vs. Linux ones.

@echo off
rem --query filters the JSON response with a JMESPath expression, and
rem --output text strips the quoting, so the DNS name lands in the variable.
for /f "delims=" %%A in ('aws elb describe-load-balancers --load-balancer-names <load balancer name> --query LoadBalancerDescriptions[*].{URL:DNSName} --output text') do set "myelbhost1=%%A"
echo %myelbhost1%

Hope this helps some of you searching for an elegant solution to the JSON-to-variable problem.

Let's conclude. Deployment can be done through:

  • AWS UI: easy, but not automated and prone to human error
  • Windows command line with the AWS CLI tools installed (or a batch file)
  • Linux command line with the AWS CLI tools installed (or a Linux shell script)
  • The same can be achieved through the AWS SDK, which uses very well documented API calls
  • Additionally, a CloudFormation template can be used to deploy an MMB, or if a specific application is used, then Beanstalk can be leveraged to automate even more; here is a good high-level explanation of how these two work together (a rough CLI sketch follows this list)
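
As a rough illustration of the CloudFormation option (the stack name and template file are placeholders of mine, not a real deployment):

@echo off
rem Create a stack from a local template, then block until creation completes.
aws cloudformation create-stack --stack-name my-mmb-stack --template-body file://mmb-template.json
aws cloudformation wait stack-create-complete --stack-name my-mmb-stack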
There are probably more such tools, and I'd like to hear your ideas on what else can be used to automate deployment in AWS.

I'll do my best to cover the long-promised Puppet in my next post.

PS: And thank you to the code formatter for saving me some time writing this.

Thursday, July 13, 2017

First discoveries of Technology community in Vancouver, BC

My exploration of the Vancouver tech scene logically started with the Tech Vancouver meetup. The huge room in TELUS Garden was packed with 140+ entrepreneurial technologists, which was very impressive. I would have spent at least an hour speaking with each and every person in that room, though that certainly was not possible.

The biggest eye-opener for me was the presentation by Geordie Rose, Founder & CEO of Kindred AI, where he mentioned that Vancouver is home to the only two Canadian technology companies ever to make MIT's 50 Smartest Companies list. (The other one is D-Wave Systems, the only commercial manufacturer of quantum computers.)


Then I had a chance to attend the Kubernetes Meetup, which was also very interesting. Again, a full room of people, and great insights into how container technology should be used today in terms of available tools, technologies and processes.

Additionally, there was a lot of feedback on how the technology needs to evolve to do a better job. There are a lot of limitations, caveats and "it depends" scenarios. When you start working with containers, you realize pretty quickly how serious the gap is between vendor marketecture and reality.

For example, short spikes in your application (such as log collection or email generation) might trigger an expansion of the application footprint, which in turn can cause physical resource exhaustion for no real reason.

As an additional learning, the term Sidecar was mentioned. More on Sidecar and other similar terms here.

Lastly, I'd like to say a huge thank you to Google for Cloud OnBoard Vancouver!

Again, great participation and networking. Based on this and the Kubernetes meetup, it seems that Kubernetes users' first choice is Google Cloud Platform (GCP). As Kubernetes was born inside Google, GCP has native support for the container orchestration platform.

Also, GCP is based on open technologies; I'm not sure what that means exactly, but they position themselves against the vendor lock-in model. Compared to AWSome Days in March, this event was a bit too basic. But Google fixed that by providing every attendee with an invite to online GCP fundamentals hands-on training created by Qwiklabs.

Compared to AWS, I found that the built-in console is very convenient and saves a few minutes over setting up a separate instance and configuring remote secure access (which is how AWS does it). If AWS is more focused on infrastructure, GCP takes a very developer-oriented approach. That is awesome for developers (the demo app in the Qwiklabs training was deployed in several different ways/form-factors); however, for me as an IT professional, AWS is easier to absorb, as they use the same language as the datacenter guys. If you want to play with GCP, you should use this offer of US$300 in credits for 12 months.


Next week I'll try to describe my experience learning Puppet orchestration.

Wednesday, July 5, 2017

It's a hobby!

Thank you for all (several dozen) of the "congrats on the new job" messages on my LinkedIn page! I really appreciate the community support! It is also a good topic for a new post on my blog!

LinkedIn has been very persistent that, for better chances of being noticed as a potential contributor, I need to have a current position listed. After starting this blog, I added it as a position, which caused the confusion. Please accept my apologies for that!

To be clear - Blogger is not a job - it's a hobby!

Additionally, I wanted my blog to have a short and sharp name. After querying registration engines for a couple of hours while sitting on a beach in White Rock, BC, my luck came through and CrispyFog was born. Now it's the official URL for this blog!

What does this name mean? Not much... Personal Computing became Cloud Computing. Cloud Computing became a huge, complex construction with a number of angles to look at and thousands of technologies to make it better, more useful, user-friendly, automated, orchestrated, etc. That's how Cloud became Fog (Fog is now almost an industry term).

So my quest here is to make this fog more definite: translate what the industry wants to do into what is possible today and what may come tomorrow. One of my favorite explanations for an attitude my colleagues may see as pessimistic: "I'm not being negative, I'm a realist." If all the marketing buzz is cut out and real-life use cases are tested against what we have today in terms of technology, the picture becomes much clearer (crispier). That's how CrispyFog was created; it's also easy to remember and type.

As for my job search: two weeks ago, after coming back from vacation, I started too aggressively. Now I have multiple conversations going with a number of recruiters, employers and friends; I almost have to stop and push back. I'm still trying to figure out what I would love to do next that will keep me excited for a while. Also, the opportunity to enjoy the summer of 2017 is priceless, especially when there is no need to run to the office if my 1.5-year-old baby-D wants to play in the sand with her papa.

In the meantime, I will probably do some contract work to keep my skills sharp and learn something new. Relocating to Silicon Valley would provide many more opportunities, but as a family we're not ready for it at the moment. Seattle is the maximum I can get my wife to agree to. After digging a bit, there are still a couple of very exciting companies in Vancouver, and possibly remote work for others.

Hope it's not too long and that it clarifies the situation! Thank you if you read all eight paragraphs!

The next blog post will cover a technology (Puppet, I hope). I'll try to limit my life-update posts as much as I can.