Applying DevOps with Chef


DevOps is taking a very important place within organisations that wish to optimise their response time to business needs. By joining forces, development and operations folks are building a whole set of tools and practices to create a continuous delivery environment. One of the pillars of this trend is Infrastructure as Code which, powered by technologies like the cloud, turns infrastructure into code that is controllable, testable and repeatable. In this article, we will explore a platform named Chef, which enables us to turn our infrastructure into code in a very pleasant way.

Desired State Configuration

There are two words that make anyone who works in software development at least lift an eyebrow: production environment (let’s refer to it as PROD from now on). If, on the one hand, there is a common understanding that there is no room for mistakes in such a “holy” environment, on the other hand we must always have a backup plan in case something goes wrong. We all know stories where certain issues were detected only in PROD, even in companies with a full quality assurance workflow in place, which generally includes at least three different environments: Development (DEV), Quality Assurance (QAT) and Production (PROD). In the end, it all comes down to the absence or presence of a specific configuration that affects system stability. This is where the so-called “Desired State Configuration” (DSC) concept comes into play.

As the name suggests, a DSC basically describes the desired state required to run the solution correctly: a set of configurations to be applied to the environment so that the solution runs flawlessly. It is normally built by the solution architect and known by the solution development team. In this article we will be using a web application that uses the well-known “Adventure Works” database. Below is a very simple DSC to run it:

The Desired State Configuration

In order to have consistent behavior across all application environments (DEV, QAT, PROD), we must ensure that each one of them has exactly the same configuration our application demands. A single discrepancy in any of those configurations among the environments might give you a headache on your next version update, especially in PROD, as you are aware of Murphy’s law.

Additionally, people management brings more complexity to the software development outlook, with two major roles involved. On one side you have the developers, in charge of building the solution using certain technologies. On the other side, you have the operations folks, in charge of keeping it all running. It is very natural that these two roles conflict on certain aspects. Developers might be eager to update versions, get new components and drive innovation. Operations is more concerned with keeping everything running smoothly with as few changes as possible, with a clear understanding that changes usually create unforeseen situations which may affect system stability.

The reality is that the final user sees only the running solution. They barely understand the difference between a developer’s job and an operations job. If either of those jobs fails, the user will simply say: it is not working! It is time to understand that it makes no sense to split the development and operations roles, because in the end we are only one: Development + Operations. We are DevOps! It is time to take the best practices from both sides: the coding skills from developers and the supporting skills from operations, backed by proper tools. Together is better!

Infrastructure as Code

As the name suggests, Infrastructure as Code is a concept that turns your infrastructure into code, allowing it to be versionable, testable and repeatable, just like any application code. Once you have code that reflects your infrastructure, you can easily share it among all environments (PROD, DEV, QAT, etc.), drastically reducing the odds of a significant difference among them.

Additionally, by having code you can take full advantage of source control tools like Git, meaning that changes to the infrastructure can be easily tracked and controlled. Moreover, your environment will be coded in a language that the whole team understands, which leads to a situation where you rely more on your process than on certain individuals. We all know some guru who is great to have on the Ops/Dev team, as long as he remains in the company. If he leaves, there is a big risk that some aspects of the environment he was maintaining walk out the door with him. If you have your infrastructure as code, this risk is drastically reduced, as the environment code is safe and sound in your source control system.

Besides being repeatable, which makes it very easy to replicate test environments for certain scenarios, code is testable, allowing the whole team to be confident about the stability of the environment being created. There are several tools that enable us to adopt Infrastructure as Code in our daily operations, such as PowerShell DSC, Puppet and Chef, among others. In this article we will be using Chef.

Getting to Know Chef

Chef is an automation platform that transforms infrastructure into code. Whether you’re operating in the cloud, on-premises, or in a hybrid environment, Chef automates how infrastructure is configured, deployed, and managed across your network. You create and test your code on your workstation before you deploy it to other environments.

A funny thing about Chef is that most of its component names are related to gastronomy. You write cookbooks, which include a set of recipes. If you need something for a recipe, go to the supermarket. Indeed, there are a lot of pre-built cookbooks on Chef Supermarket, the Chef cookbook repository, much like NuGet for .NET applications.

To use Chef, you need the following components:

Chef uses Ruby as its language, a dynamic, open source programming language with a focus on simplicity and productivity. As its elegant syntax is natural to read and easy to write, the learning curve is quite reasonable.
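
To give you a hedged taste of that readability (this snippet is plain Ruby, not Chef-specific, and the feature names are just examples borrowed from later in this article):

```ruby
# A small taste of plain Ruby: iteration, string interpolation and
# filtering read almost like plain English.
features = %w(Web-Mgmt-Tools Web-Asp-Net45 NET-Framework-Core)

features.each do |feature|
  puts "Enabling feature: #{feature}"
end

# Blocks and method chaining keep the intent close to the prose description.
web_features = features.select { |f| f.start_with?("Web-") }
puts "#{web_features.size} web-related features selected"
```

This style of block-based iteration is exactly what Chef builds its recipe DSL on top of.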
 
In this article we will use chef-client to run our cookbooks. But it is important to understand that Chef is a complete distributed solution that can work in a wide variety of scenarios, as the image below shows.

To learn more about Chef, please refer to https://learn.chef.io.

Chef Supermarket

By accessing https://supermarket.chef.io you get access to a massive repository of pre-built Chef cookbooks. In this article we will be using the following ones:
Windows: used to enable Windows features such as IIS, .NET Framework features and so on.
IIS: used to configure IIS components such as web applications, web sites, virtual directories and so on.

Chef in Action

Our mission is very clear: we will take a freshly installed Windows 2012 machine and perform all the configurations from the DSC mentioned in the first section of this article. The machine will need to have the Chef Development Kit or at least the Chef Client installed. You can download it from https://downloads.chef.io/. In our samples I am going to use the ChefDK.

Chef Development Kit (ChefDK) installation

All code listed here is available in this Git repository: https://github.com/pcbl/chef_adventure_works.git. Feel free to clone it and take a closer look.

Configuring the environment

We will be using the chef-client default configuration, which requires us to have a client.rb file in a folder named chef at the root of the C drive. The “C:\chef\client.rb” file contains the basic configuration required for Chef to run, as follows:

log_level :info
log_location 'C:\chef\client.log'
chef_server_url 'https://localhost:4000'
validation_client_name 'chef-validator'
chef_zero.enabled true
chef_zero.port 4000
local_mode true
listen false
cookbook_path ['C:\chef_repo\cookbooks','C:\chef_repo\cookbooks\.vendor']
node_name 'localhost'
node_path 'C:\chef_repo\nodes'

 

Basically, the file defines some basic settings such as log-related information, where cookbooks should be loaded from (‘C:\chef_repo\cookbooks’) and the chef-client node path (‘C:\chef_repo\nodes’), where a ‘localhost’ JSON file points to the node’s run list, which defines the execution order of one or more recipes. In our case, we will have only one recipe on the run list, as follows:

 

{"run_list": [
    "recipe[chef_adventure_works::default]"
  ] }

Something important to notice is that our cookbook folder must be created under the cookbooks folder to work correctly. To do so, let’s run the following command inside the cookbooks folder using a command prompt (cmd.exe):

C:\chef_repo\cookbooks>chef generate cookbook chef_adventure_works

The command above will create a cookbook named chef_adventure_works. All steps of the DSC mentioned in the first section of this article will be covered within this cookbook.

So, to recap, this is everything we have done so far:

  1. Created a C:\chef folder
  2. Added the C:\chef\client.rb file with the default chef configuration (Code 1)
  3. Created a C:\chef_repo folder with two subfolders:
    1. Cookbooks: Will hold the cookbooks used by the chef node
    2. Nodes: will hold information about the chef-client node
  4. Created the localhost.json inside C:\chef_repo\nodes\ folder (code 2)
  5. Created a cookbook named chef_adventure_works (code 3)
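
The folder layout and node file from the steps above can be sketched as a small convenience script in plain Ruby (the paths are the ones used throughout this article; Ruby accepts forward slashes on Windows, and this script is merely illustrative, not part of Chef itself):

```ruby
require 'fileutils'
require 'json'

# Recreate the folder layout used throughout this article.
%w(C:/chef C:/chef_repo/cookbooks C:/chef_repo/nodes).each do |dir|
  FileUtils.mkdir_p(dir)
end

# The node file points chef-client at the cookbook's default recipe.
node = { 'run_list' => ['recipe[chef_adventure_works::default]'] }
File.write('C:/chef_repo/nodes/localhost.json', JSON.pretty_generate(node))
```

The client.rb and the `chef generate cookbook` step still have to be done as shown earlier; this only scaffolds the directories and the node JSON.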

Once all those steps are performed, if you open a command prompt window, type chef-client and hit Enter, you will get something similar to the screenshot below:

Run chef-client for the first time

All this is saying is that our cookbook was successfully executed and 0 of 0 resources were updated. This is correct because, so far, nothing was done. Now we can start working on our DSC.

Enabling Windows Features

We will start by enabling certain Windows features. To do so, we will be using the windows cookbook, straight from Chef Supermarket. You can find it at https://supermarket.chef.io/cookbooks/windows.

To use the windows cookbook, we must add a proper dependency to our cookbook. All we need to do is add the following line to our cookbook metadata (note the last line):

name 'chef_adventure_works'
maintainer 'The Authors'
maintainer_email 'you@example.com'
license 'all_rights'
description 'Installs/Configures chef_adventure_works'
long_description 'Installs/Configures chef_adventure_works'
version '0.1.0'
depends 'windows', '~> 1.44.0'

After adding this dependency, we will be able to use the windows cookbook’s resources. We will use the windows_feature resource to install the Windows features we want, using the PowerShell provider. The code below shows how to install the IIS management tools:

windows_feature 'Install IIS Management Tools' do
  feature_name 'Web-Mgmt-Tools'
  action :install
  all true
  provider :windows_feature_powershell
end

The code above uses a resource named windows_feature. We named this resource “Install IIS Management Tools”. It will use PowerShell to install (the :install action) a Windows feature named “Web-Mgmt-Tools” and all its dependencies (which is what the “all true” bit means). The “Web-Mgmt-Tools” feature actually installs the IIS management tools. You might be wondering how we got this name. The easiest way to get a Windows feature name is to simply run the PowerShell cmdlet Get-WindowsFeature, as the screenshot below shows:

Get-Windowsfeature cmdlet results

Once you get the command results, just create a list of the features you want to install. In our case, we selected the following ones:

Windows Features to be enabled

To enable all those Windows features, we will create a new recipe named enable_windows_features.rb in the recipes folder with the following code:

features = %w(
  Web-Default-Doc
  Web-Dir-Browsing
  Web-Http-Errors
  Web-Static-Content
  Web-Http-Redirect
  Web-Http-Logging
  Web-Custom-Logging
  Web-Log-Libraries
  Web-ODBC-Logging
  Web-Request-Monitor
  Web-Http-Tracing
  Web-Performance
  Web-Stat-Compression
  Web-Dyn-Compression
  Web-Filtering
  Web-Basic-Auth
  Web-CertProvider
  Web-Client-Auth
  Web-Digest-Auth
  Web-Cert-Auth
  Web-IP-Security
  Web-Url-Auth
  Web-Windows-Auth
  Web-Net-Ext
  Web-Net-Ext45
  Web-AppInit
  Web-Asp-Net
  Web-Asp-Net45
  Web-Mgmt-Console
  Web-Scripting-Tools
  NET-Framework-Features
  NET-Framework-Core
  Application-Server
  AS-NET-Framework
  AS-Web-Support
  AS-WAS-Support
  AS-HTTP-Activation
  PowerShellRoot
  PowerShell
  PowerShell-V2
  PowerShell-ISE
)
# Then we install each feature
features.each do |feature|
  windows_feature "Install #{feature}" do
    feature_name feature
    action :install
    provider :windows_feature_powershell
  end
end

The code is quite easy to understand. All we do is declare an array of strings with all the Windows features we want to enable. After that, we iterate over the array, installing each item individually.

In order to call this recipe, let’s change the contents of the default.rb file as follows:

include_recipe "#{cookbook_name}::enable_windows_features"

The line above just includes the enable_windows_features recipe in the default.rb file. If you refer to the Code 2 file contents, you will notice that default.rb is the recipe being called from the node’s run list. So, all we have done is include our enable_windows_features.rb in a way that it gets called when the default.rb recipe is called.

Now, open a command prompt with administrative privileges. They are required because we are actually modifying system settings, which demands higher privileges.

Run command prompt with administrative privileges

Once you run chef-client from that command prompt, you will get a different result than in Figure 2 (Run chef-client for the first time). Actually, we get an error:

“No such cookbook: windows”

We get this error because we added a dependency on the windows cookbook, but we did not download it to the vendor folder. That is what we will do next.

Downloading dependencies

Since we are depending on a vendor cookbook, we need to download it into our cookbooks/.vendor folder. This is the place where all third-party cookbooks get downloaded to; we defined it in the cookbook_path variable from the Code 1 file contents. We will be using Berkshelf to download the dependencies via the berks command. Please refer to http://berkshelf.com for more information. Berkshelf is deployed together with the ChefDK. Just run the command below to download the dependencies:
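
For reference, the Berksfile that `chef generate cookbook` placed inside our cookbook is itself a tiny Ruby file. A minimal version looks like the sketch below (the supermarket URL is the public default; your generated file may contain extra comments):

```ruby
# Berksfile: tells Berkshelf where to resolve cookbook dependencies from.
source 'https://supermarket.chef.io'

# Read the dependency list (the 'depends' lines) from metadata.rb,
# so the Berksfile never has to repeat it.
metadata
```

Because of the `metadata` directive, adding a `depends` line to metadata.rb is all it takes for berks to know what to download.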

berks vendor "C:\chef_repo\cookbooks\.vendor" -b "C:\chef_repo\cookbooks\chef_adventure_works\Berksfile"

berks vendor command execution

One important thing to notice here is that the berks command will also treat our own cookbook as a vendor cookbook. So, after running the command, just go to the cookbooks/.vendor folder and remove our chef_adventure_works cookbook.

Delete the chef_adventure_works folder from .vendor after running berks

Enable Windows Features (cont.)

Now that the windows cookbook was properly downloaded in the previous section, let’s run chef-client once again from our administrative command prompt. This time we will see a different result, showing that the Windows features were enabled successfully.

Chef-Client execution after Windows Features were enabled

As you can see, it took about 10 minutes to run the command, and 28 of 44 resources were modified. Basically, each resource we declared (in this case, each call within our loop) represents at least one resource. If a certain feature is already enabled, Chef does absolutely nothing, because that resource is already in the state we want. That is the reason why we got 28 instead of 44! With that in mind, what happens if we run the command once again? Let’s see the results:

Chef-Client second run after features were already installed

As you can see, now we got 0 of 44 resources updated. Additionally, it took just 1 minute to run the whole code again, and we would get exactly the same result on every subsequent run. This means our code is idempotent, a crucial concept meaning that multiple applications of the same action have no additional side effects on the system state. So, it doesn’t matter how many times you run the cookbook: as long as the resources you are configuring are already in the desired state, nothing will happen! Bear that in mind when you are building your own cookbooks. Being idempotent is critical to reduce overhead on the server, mainly because chef-client will usually run on the server from time to time to ensure that the DSC is always fulfilled.
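
The idempotency pattern Chef follows can be sketched in plain Ruby: check the current state first, and act only when it differs from the desired state. (This is an illustrative model, not Chef’s actual implementation; the feature names are examples.)

```ruby
# Simulated node state: features already enabled on the machine.
enabled_features = ['Web-Mgmt-Tools']

# An idempotent operation: inspect the current state, act only on a mismatch.
def ensure_feature(enabled, feature)
  if enabled.include?(feature)
    :up_to_date   # already converged, no side effects
  else
    enabled << feature
    :updated      # state changed to match the desired configuration
  end
end

first_run  = %w(Web-Mgmt-Tools Web-Asp-Net45).map { |f| ensure_feature(enabled_features, f) }
second_run = %w(Web-Mgmt-Tools Web-Asp-Net45).map { |f| ensure_feature(enabled_features, f) }
# first_run updates only the missing feature; second_run updates nothing.
```

This mirrors what we saw above: 28 of 44 resources updated on the first run, 0 of 44 on the second.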

Installing SQL Server

We just enabled all the Windows features we are going to need. The next step is to actually install SQL Server and restore the Adventure Works database on top of it. Luckily, SQL Server supports a complete non-interactive installation. All we need is a configuration file that defines all setup parameters; once we have that, we just call the setup with a parameter pointing to our installation configuration file. For more information about this procedure, please refer to this MSDN article: https://msdn.microsoft.com/en-us/library/dd239405(v=sql.120).aspx

The first thing we will do is create a folder on the machine where we are going to install SQL Server, holding all the files needed for the installation. For now, we just need the SQL Server ISO file. We will create a folder named “C:\Deploy\” and add the file there. In this article I will be using the SQL Server 2014 Developer Edition with Service Pack 2 ISO file.

Then we must generate an installation .INI file with the features we want to install. To do that, just run the SQL Server setup and follow the steps up to the “Ready to install” step. There you will see a mention of the generated INI file containing all the settings you configured.

Follow the installation up to the “Ready to install”
Step to create an Installation INI File

Naturally, you can use the INI file available in the GitHub repository to save some time. With that INI file, only the SQL Server Database Engine service will be installed, without components such as Reporting Services or Integration Services.

We will create a new recipe in our cookbook named install_sql_server.rb with the code below:

full_path_iso_file =
  'C:\Deploy\en_sql_server_2014_developer_edition_with_service_pack_2_x64_dvd_8967821.iso'
files_folder = File.expand_path('../files', File.dirname(__FILE__))
setup_ini_file = "#{files_folder}/SQL_Server_Setup_Config.ini"

reboot 'if_pending' do
  action :reboot_now
  reason 'There is a pending reboot.'
  only_if { reboot_pending? }
end

powershell_script 'Install SQL Server' do
  code <<-EOH
  Mount-DiskImage -ImagePath #{full_path_iso_file}
  $driveLetter = (Get-DiskImage #{full_path_iso_file} | Get-Volume).DriveLetter
  iex "$($driveLetter):\\setup.exe /CONFIGURATIONFILE=#{setup_ini_file} /IAcceptSQLServerLicenseTerms /SAPWD=!sql2014"
  Dismount-DiskImage -ImagePath #{full_path_iso_file}
  EOH
  not_if <<-EOH
    Test-Path "HKLM:\\Software\\Microsoft\\Microsoft SQL Server\\Instance Names\\SQL"
  EOH
  notifies :reboot_now, 'reboot[if_pending]', :immediate
end

Basically, we are using a powershell_script resource with a PowerShell script that performs the following steps:

  • Reboot before installing, but only if there is a pending reboot
  • Mount a disk image (Mount-DiskImage) for the provided ISO file (full_path_iso_file variable)
  • Get the drive letter (Get-DiskImage) to which the ISO image was mounted
  • Call setup (via iex) with the CONFIGURATIONFILE parameter (pointing to the setup_ini_file variable). We are enabling mixed authentication mode with the ‘sa’ password being ‘!sql2014’. Please note that providing a clear-text password is not good practice; we are doing it just as a sample. The best way would be to use solely Windows authentication, or to provide the password in a safer manner (you could use an environment variable, for example)
  • Unmount the disk image
  • All the code will be executed ONLY IF SQL Server is not already installed. We check that simply by verifying whether the registry key “HKLM:\Software\Microsoft\Microsoft SQL Server\Instance Names\SQL” exists
  • Once the resource runs, we trigger a reboot if necessary; this is done via the notifies call

Once we have this code in place, it is time to include it in the default.rb recipe, as done previously in Code 7 (recipes/default.rb file contents). After that, the code should look like this:

include_recipe "#{cookbook_name}::enable_windows_features"
include_recipe "#{cookbook_name}::install_sql_server"

Once you perform a new chef-client run, you will notice that it takes about 15-20 minutes (depending on how powerful the machine is) to install SQL Server. If a reboot is needed, the machine will restart automatically.

SQL Server being installed (drive mounted)

Restoring database backup

Now that we have SQL Server installed, it is time to restore the database backup we want to use in our application. You can download the Adventure Works backup from here: https://msftdbprodsamples.codeplex.com/releases/view/55330. Just download it and add it to the C:\Deploy folder.

Backup file added to the Deploy folder

In order to connect to SQL Server and run the restore script, we will be using the PowerShell Invoke-Sqlcmd cmdlet, which was installed with SQL Server. Check the code below:

directory 'C:\AdventureWorksDB' do
  action :create
end

files_folder = File.expand_path('../files', File.dirname(__FILE__))
sql_command = ::File.read("#{files_folder}/restore_database.sql")
bak_file_path = 'C:\Deploy\AdventureWorks2012-Full Database Backup.bak'

powershell_script 'Restore AdventureWorks database' do
  code <<-EOH
$query = @'
#{sql_command}
'@
Import-Module SqlPs
Invoke-Sqlcmd -ServerInstance localhost -Database 'master' -QueryTimeout 3600 -Query $query -Username 'sa' -Password '!sql2014'
$timestamp = Get-Date -Format yyyy.MM.dd.HH.mm
$file_after_backup = '#{bak_file_path}.' + $timestamp
Move-Item -Path '#{bak_file_path}' -Destination $file_after_backup
EOH
  only_if { ::File.exist?(bak_file_path) }
end

As the code shows, first we create a folder named ‘C:\AdventureWorksDB’, where the backup will be restored to. Please note that, following the idempotency principle, the directory resource will do nothing if the folder already exists. Then the contents of restore_database.sql are loaded and executed via Invoke-Sqlcmd. Once that is done, we rename the .bak file by adding a timestamp suffix. We do this because we use the existence of the .bak file as our idempotency condition (refer to the only_if bit). You may also have a look at the restore_database.sql file:
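
The rename trick can be sketched in plain Ruby: once the .bak file gets a timestamp suffix, the only_if guard no longer finds it, so a second run does nothing. (This is an illustrative model of the recipe’s guard logic, not the recipe itself; the actual restore is replaced by a comment.)

```ruby
require 'tmpdir'

# Guarded restore step: runs only while the original .bak file still exists.
def restore_and_archive(bak_file, now = Time.now)
  return :skipped unless File.exist?(bak_file)  # mirrors the only_if guard

  # ...the actual Invoke-Sqlcmd restore would happen here...
  archived = "#{bak_file}.#{now.strftime('%Y.%m.%d.%H.%M')}"
  File.rename(bak_file, archived)               # breaks the guard for later runs
  :restored
end

Dir.mktmpdir do |dir|
  bak = File.join(dir, 'AdventureWorks2012-Full Database Backup.bak')
  File.write(bak, 'fake backup contents')
  restore_and_archive(bak)  # first run restores and archives the file
  restore_and_archive(bak)  # second run is skipped: the step is idempotent
end
```

The timestamp suffix also conveniently keeps an audit trail of when each restore happened.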

DECLARE @bakFile NVARCHAR(400),
        @mdfFile NVARCHAR(400),
        @ldfFile NVARCHAR(400)
set @bakFile = 'C:\Deploy\AdventureWorks2012-Full Database Backup.bak'
set @mdfFile = 'AdventureWorks2012_Data'
set @ldfFile = 'AdventureWorks2012_Log'

IF EXISTS (SELECT name FROM sys.databases WHERE name = 'AdventureWorks')
 EXEC ('ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;');
RESTORE DATABASE AdventureWorks
FROM DISK = @bakFile WITH
MOVE 'AdventureWorks2012_Data' TO
'C:\AdventureWorksDB\AdventureWorks2012_Data.mdf',
MOVE 'AdventureWorks2012_Log' TO
'C:\AdventureWorksDB\AdventureWorks2012_Log.ldf'
ALTER DATABASE AdventureWorks SET MULTI_USER
GO

The T-SQL script is also quite easy to understand. All it does is restore the database to the folder C:\AdventureWorksDB. The script works even if the database is in use, as we switch it to SINGLE_USER before actually restoring and set it back to MULTI_USER once the restore is complete.

To run the restore operation, we must include it in our default recipe. After that, the code should look like this:

include_recipe "#{cookbook_name}::enable_windows_features"
include_recipe "#{cookbook_name}::install_sql_server"
include_recipe "#{cookbook_name}::restore_sql_database"

When we run chef-client again, the database will be successfully restored. Make sure the .bak file is at ‘C:\Deploy\AdventureWorks2012-Full Database Backup.bak’. Once the database is restored, the file will be renamed as defined in the restore_sql_database recipe.

AdventureWorks Database successfully restored

Configuring IIS

The last recipe we will be writing is going to configure a web application on IIS. In order to do that, we will be using the iis vendor cookbook, so we must add a dependency in metadata.rb as follows:

name 'chef_adventure_works'
maintainer 'The Authors'
maintainer_email 'you@example.com'
license 'all_rights'
description 'Installs/Configures chef_adventure_works'
long_description 'Installs/Configures chef_adventure_works'
version '0.1.0'
depends 'windows', '~> 1.44.0'
depends 'iis', '~> 4.2.0'

Once the dependency is added, you will need to run berks once again, as we did in Code 8. Do not forget to delete the chef_adventure_works folder from .vendor, as described in the Downloading dependencies section.

With the dependency defined, it is time to configure IIS by creating a web application for our sample. You can check the code below:

directory 'C:\AdventureWorksWebApp' do
  action :create
end

files_folder = File.expand_path('../files', File.dirname(__FILE__))
file 'C:\AdventureWorksWebApp\Default.aspx' do
  content ::File.read("#{files_folder}/default.aspx")
  action :create
end

iis_app 'Configure AdventureWorks Web App' do
  action :add
  site_name 'Default Web Site'
  path '/AdventureWorks'
  physical_path 'C:\AdventureWorksWebApp'
end

Basically, all we do is create the directory ‘C:\AdventureWorksWebApp’. After that we create a Default.aspx file with the contents below (read from the cookbook’s files/default.aspx):

<%@ Page language="c#" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<script runat="server">
void Page_Load(Object sender, EventArgs e)
{
   SqlConnection cnn = new SqlConnection("server=.;database=adventureworks;uid=sa;pwd=!sql2014");
   SqlDataAdapter da = new SqlDataAdapter("SELECT NAME,PRODUCTNUMBER FROM PRODUCTION.PRODUCT", cnn);
   DataSet ds = new DataSet();
   da.Fill(ds, "product");
   Repeater1.DataSource = ds.Tables["product"];
   Repeater1.DataBind();
}
</script>
<html>
<body>
   <form id="WebForm2" method="post" runat="server">
      <asp:Repeater id="Repeater1" runat="server">
         <ItemTemplate>
            <%# DataBinder.Eval(Container.DataItem,"NAME") %><br>
         </ItemTemplate>
      </asp:Repeater>
   </form>
</body>
</html>

All this code does is connect to the AdventureWorks database and list the product names from the Production.Product table. The ASPX code is not pretty, but it is just a sample to show that the environment is up.

As soon as you perform a new chef-client run, you will see that the application was configured on IIS (pointing to C:\AdventureWorksWebApp) and, if you browse to http://localhost/adventureworks, you will actually see the products listed as in the screenshot below.

Web Application Running successfully

With that done, we have covered all topics from our DSC! All that is left is to ensure that the machine state remains as desired. To do that, we will install the chef-client service, which will run chef-client automatically every 30 minutes to keep the environment configured as desired.

Installing and Configuring Chef Service

In order to configure the chef-client service, we first need to add a dependency on the chef-client cookbook. We do this by adding the following line to the metadata.rb file. Remember to run berks once again, as done in Code 8 (Downloading dependencies), and to delete the chef_adventure_works folder from .vendor.

name 'chef_adventure_works'
maintainer 'The Authors'
maintainer_email 'you@example.com'
license 'all_rights'
description 'Installs/Configures chef_adventure_works'
long_description 'Installs/Configures chef_adventure_works'
version '0.1.0'
depends 'windows', '~> 1.44.0'
depends 'iis', '~> 4.2.0'
depends 'chef-client', '~> 5.0.0'

Now, instead of including a recipe in our default.rb file, we will just add one more recipe to the nodes/localhost.json file from the Configuring the environment section.

{"run_list": [
    "recipe[chef_adventure_works::default]",
    "recipe[chef-client::windows_service]"
  ] }

Now, if you perform another chef-client run, you will see that Chef is configured to run as a service every 30 minutes (1800 seconds), as the image shows:

Chef Service installed

The service will ensure that the machine always matches the DSC. For example, if someone disables a Windows feature that is relevant for the application (IIS, for example), it will be corrected on the next chef run.
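
Conceptually, each scheduled run is a convergence pass. A hedged plain-Ruby sketch of that loop (not the actual chef-client implementation; the feature names are examples from this article) looks like this:

```ruby
# Desired state: features that must stay enabled on the node.
DESIRED_FEATURES = %w(Web-Mgmt-Tools Web-Asp-Net45).freeze

# One convergence pass: re-enable anything that drifted, report what changed.
def converge(actual)
  drifted = DESIRED_FEATURES - actual
  drifted.each { |f| actual << f }  # corrective action
  drifted
end

state = ['Web-Mgmt-Tools']  # someone disabled Web-Asp-Net45 between runs
converge(state)             # the next pass detects and corrects the drift
converge(state)             # subsequent passes find nothing to fix
```

The chef-client service simply repeats this kind of pass every 1800 seconds, which is why drift never survives for long.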

Conclusion

This article went from a brand new Windows 2012 installation to a completely running solution, including the SQL Server installation, database restore and IIS configuration. The beautiful thing here is that, once the cookbook is built, it is incredibly fast to replicate it to all environments, including production. The cookbook fully automates the creation of the solution environment, drastically reducing the risk of differences among environments. Do not hesitate to check the cookbook’s full code at https://github.com/pcbl/chef_adventure_works.

One thing that was not covered in this article is the adoption of cloud-based solutions using providers such as Amazon’s AWS or Microsoft’s Azure. In an optimal scenario, it is possible to automate the full deployment of the solution with the help of continuous integration solutions such as Jenkins, Go Server, TeamCity, etc. The general idea would follow the workflow below:

  1. Code gets committed
  2. The continuous integration tool gets notified
  3. An automated build takes place
  4. If the build was successful, start testing (unit tests, integration tests…)
  5. If the tests pass, start the application deployment
  6. A machine is requested on the cloud provider (AWS, Azure)
  7. Once the machine is available, install Chef
  8. Run the deployment cookbook
  9. Deployment successful? Promote the machine as the new application server
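
The workflow above can be sketched as a simple pipeline in Ruby, where any failing stage stops the deployment (the stage names are illustrative, not tied to any specific CI tool):

```ruby
# Each stage returns true on success; the pipeline stops at the first failure.
PIPELINE = %w(build unit_tests integration_tests provision_machine
              install_chef run_cookbook promote_server).freeze

def run_pipeline(results)
  PIPELINE.each do |stage|
    return [:failed, stage] unless results.fetch(stage, false)
  end
  [:deployed, nil]
end
```

In a real setup each stage would shell out to the CI tool, the cloud provider’s API and chef-client, but the control flow is exactly this: fail fast, promote only after everything passes.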

For some, this workflow might look like science fiction, but the reality is that it is happening right now, and the possibilities are incredibly positive. If, on the one hand, companies can drastically reduce costs by automating environment setup, on the other hand it creates a highly testable environment, supporting the operations folks in ensuring everything is fine while developers are free to drive innovation.