Categories
Azure Cloud

Creating a Virtual Machine using Azure Portal

At work, we use Azure as our primary cloud provider. Our infrastructure runs on Azure, so there is a real need for me to learn the platform and its services. With my AWS background, I believe that experience will help me gain a deeper knowledge of Microsoft’s cloud. I have written about Azure in the past, but now I want to start preparing for Azure certifications. I need to drink the Azure Kool-Aid. I want to start this process by creating a virtual machine.

Let’s create a virtual machine using Azure Portal.

Azure Portal – Create Virtual Machine – Basics

Fill out the basic details, like the operating system under Image. Here you can choose between Linux and Windows images; for this post, I’m choosing an Ubuntu server. After completing the required inputs on the Basics tab, I move on to the Disks section, where I accept the default settings. I continue through Networking, Management, Advanced, and Tags, and finally Review + create.
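
For reference, the portal steps above map to a couple of Azure CLI commands. This is a minimal sketch, assuming the CLI is installed and you are logged in; the resource group and VM names are placeholders, and the Ubuntu image alias may differ by CLI version.

# Create an Ubuntu VM with generated SSH keys (names below are hypothetical)
az group create --name my-demo-rg --location southcentralus
az vm create --resource-group my-demo-rg --name my-demo-vm --image UbuntuLTS --admin-username azureuser --generate-ssh-keys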

Azure Portal – Virtual Machine – Review

After creating my virtual machine, I noticed that the server was ready within two minutes. At this point, I was curious about what resources had been created. At the top of the list, I see my virtual machine. I also see a network interface, a storage account, a network security group, a virtual network, and a public IP address.
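
A quick way to see everything that was created alongside the VM is the Azure CLI; this is a sketch assuming the hypothetical resource group name from the snippet above.

# List every resource in the VM’s resource group
az resource list --resource-group my-demo-rg --output table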

Since I’m not going to use this virtual machine, I will delete it so I don’t get charged for it. In a future post, I’m going to explore the .NET SDK and create a virtual machine by writing C# code.

Categories
Azure Cloud

Azure Resource Group

AWS has been my cloud provider for many years. I have used it to host .NET applications, SQL Server databases, and other services like email, storage, and queues. I have gained valuable experience on that platform, but it’s time to play with Azure. In this post, I want to share how to create resource groups and explain their benefits.

First, let’s create a resource group. After you log in to the Azure portal, click Resource groups in the left menu. Now click the Add button, enter a valid resource group name, and select a subscription and location. For this example, I’m using dev-resource-group, Pay-As-You-Go, and South Central US to create my resource group.
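
The same resource group can be created with one Azure CLI command; this is a sketch assuming the CLI is installed and you are logged in.

# Create the resource group from the example above
az group create --name dev-resource-group --location southcentralus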

A resource group is a container that holds resources like web sites, storage accounts, and databases in a group so we can manage them as a single unit. I like to think of a resource group as my dinner plate: I use a plate to hold my food (meat, vegetables, dessert, etc.), and once I’m done eating I can throw away the plate along with any food that is left.

Now let’s add a new app service. Click the App Services link in the left menu and click Add. In the Web Apps section, select WordPress on Linux and click the Create button. Enter the required fields and make sure you select the resource group created in the previous step.

Just to verify that our resource group is associated with our new WordPress site, click Resource groups again and select the resource group. I see three items associated with my resource group: an App Service, an Azure Database for MySQL server, and an App Service plan.

Let’s create one more app service. Choose Web App, click Create, and fill in all required fields, making sure you select the same resource group from the previous step. In the OS section, select Docker and configure your container.
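
For comparison, here is a hedged Azure CLI sketch of the Docker-based web app; the plan name, app name, and container image below are placeholders.

# Create a Linux App Service plan, then a container-based web app in the same resource group
az appservice plan create --name dev-plan --resource-group dev-resource-group --is-linux --sku B1
az webapp create --name my-docker-app --plan dev-plan --resource-group dev-resource-group --deployment-container-image-name nginx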

Now our resource group has a total of four items in it. These items belong to my dev environment, and I’m ready to delete this environment since I no longer need it. Select the resource group, click the three dots, and select Delete resource group.

Type the resource group name and click the Delete button. After a few minutes, the four items associated with our resource group will be deleted along with the resource group itself. As you can see, resource groups allow us to group resources in a cohesive way so we can manage them more easily. I have seen resource groups used for different environments like dev, test, and production. Once you are done with your resources, just delete the resource group and it will save you a lot of time and effort.
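
The cleanup is also a one-liner in the Azure CLI; a sketch, assuming the group name from this post.

# Delete the resource group and everything inside it; --yes skips the confirmation prompt
az group delete --name dev-resource-group --yes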

Categories
.Net ASP.NET ASP.NET MVC AWS CI Cloud

.NET Build Server using Visual Studio Community 2017

In a previous post, I wrote about building a .NET continuous integration server using Windows Server 2016, Jenkins, and Build Tools. For that setup, I avoided installing any Visual Studio software. In my opinion, it was a simple process to get everything installed and building correctly. Now that Visual Studio 2017 has been released, I want to set up a build server using the Community edition. Here are the 7 steps I took to get a .NET build server running on Windows Server 2016 with Visual Studio 2017 Community edition, Jenkins, and Git.

1. Launch a new Windows Server 2016 instance. I’m currently investing in learning AWS, so I’ll use it as my cloud provider. Feel free to use Azure, Google Cloud, or your own server.

2. Download and install Visual Studio 2017 Community edition. For this tutorial, I selected the ASP.NET and web development workload during installation.

3. Download and install Jenkins. I selected version 2.46.1 for this tutorial. After installing Jenkins, you have the option to install the recommended plugins or select them one by one. For this tutorial, I went with the option to install the recommended plugins.

4. Download and install Git. If you use a different source control tool, go ahead and install that instead. After installing Git, I went to the Global Tool Configuration section and updated the path to C:\Program Files\Git\bin\git.exe.

5. Install the MSBuild plugin. Go to the Manage Jenkins section and select Manage Plugins. From the Available tab, find MSBuild and install it. I also updated the MSBuild path in the Jenkins settings to C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\msbuild.exe.

6. Create a new Jenkins job. I used the freestyle option. In the job settings, I configured Git as my source control tool. Since I’m building a .NET solution, I selected the build step that uses MSBuild and gave it the solution or project file name (see the sketch after this list).

7. Trigger a new build. If all of the steps above were done correctly, you should have a successful build.
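
For reference, the MSBuild build step boils down to an invocation like the one below; the solution name is hypothetical, and the MSBuild path matches step 5.

REM Build the solution in Release mode with the Visual Studio 2017 Community MSBuild
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\msbuild.exe" MySolution.sln /t:Rebuild /p:Configuration=Release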

Installing Visual Studio Community edition makes it easier to set up a build server. After installing it, you get the .NET Framework versions you need plus the MSBuild executable. In my previous setup, I had to install more software: Build Tools 2015 and .NET Framework 4.5.2 and 4.6.2. Hopefully this post helps you set up a reliable build server for your .NET applications.

Categories
AWS Cloud

What resources get created by AWS Elastic Beanstalk?

Creating a new Elastic Beanstalk application is a simple process. Give it a name and select a platform. That’s it. Simple. However, it is important that we understand what resources get created when we launch a new Elastic Beanstalk app. In this post, we will explore the resources created by AWS Elastic Beanstalk.

What is AWS Elastic Beanstalk?

Before we dive into the details, let’s take a moment and explain what Beanstalk is. This is how AWS defines Elastic Beanstalk, “AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to store and run your applications.”

Load Balancers

When you launch a new Beanstalk app, you have two options for your scaling needs: single instance, or load balancing with auto scaling. If you select single instance, your environment will always run exactly one instance. You might use this setting to test a simple web app or service. If your app needs to scale out, you can create your environment with the load balancing, auto scaling option. By default, the minimum instance count is set to 1 and the maximum to 4. You can change these values in the configuration section.
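
The instance counts can also be changed from the AWS CLI. This is a sketch, assuming a hypothetical environment named my-env; the option names live in the aws:autoscaling:asg namespace.

# Set the auto scaling group to a minimum of 2 and a maximum of 4 instances
aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=2 Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=4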

To see the load balancer that was created, go to the EC2 console and click the Load Balancers section in the left menu. Here you can see the load balancer name, DNS settings, VPC ID, availability zones, type, and creation date. Most of these settings can be controlled from the configuration section of your Beanstalk console.

Auto Scaling

For the auto scaling settings, we find that a launch configuration and an auto scaling group were created. The launch configuration contains information about the type of instance required in the auto scaling group. In my example, I see AMI ID ami-365ae520, instance type t1.micro, and other settings.

In the Auto Scaling Groups section of the EC2 page, you will see a link to the launch configuration, the min and max instance counts, availability zones, the default cooldown, and other settings. Most of these settings can be changed from your Beanstalk configuration.

EC2 Instance

If you go to the Instances section, you will see that a new instance was created. In my case, I see the instance name, instance ID, public DNS, and other settings as well. If you have a scaling trigger set, additional instances can be created. The most common option is to scale based on CPU usage. For example, you can specify that CPU utilization above 75% scales out (adds instances) and utilization below 25% scales in (terminates instances).
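
That CPU trigger can be expressed with option settings in the aws:autoscaling:trigger namespace; again a sketch, assuming the hypothetical environment name.

# Scale on CPU: add instances above 75% utilization, terminate below 25%
aws elasticbeanstalk update-environment --environment-name my-env --option-settings Namespace=aws:autoscaling:trigger,OptionName=MeasureName,Value=CPUUtilization Namespace=aws:autoscaling:trigger,OptionName=Unit,Value=Percent Namespace=aws:autoscaling:trigger,OptionName=UpperThreshold,Value=75 Namespace=aws:autoscaling:trigger,OptionName=LowerThreshold,Value=25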

In addition to these resources, Beanstalk also creates IAM roles, S3 objects, security groups, and more. Creating a new AWS Elastic Beanstalk application is a very simple process, but understanding what gets created behind the scenes is crucial.
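
If you prefer not to click through the console, one AWS CLI call lists most of these resources (the environment name is hypothetical).

# Show the load balancer, auto scaling group, launch configuration, and instances behind an environment
aws elasticbeanstalk describe-environment-resources --environment-name my-env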

Categories
AWS Cloud General

How to migrate your WordPress blog from Bluehost to Amazon Lightsail

I published my first blog post back on April 25, 2014. I had used GoDaddy before, but I wanted to try a different host. After my research, I decided to host a WordPress blog using Bluehost. Bluehost is a great host and I highly recommend them. One of the great features they have is automatic backup of your WordPress site and MySQL database.

Why migrate?

Lately, I’ve been studying to take the AWS Certified Developer Associate exam. To force myself to learn AWS in depth, I need to use it frequently. That’s why I decided to migrate my blog from Bluehost to Amazon Lightsail. With Lightsail, it is very simple to create a WordPress site: you select the WordPress image for your instance, select a plan, and finally name your instance.

Here are the steps I took to migrate my WordPress blog from Bluehost to Amazon Lightsail:

Website Backup

Since Bluehost provides daily backups of my WordPress site, the only thing I had to do was download the public_html folder. This folder contains all the files needed to run WordPress. If you are using a different host, you can find your WordPress folder by searching for the wp-admin, wp-content, and wp-includes folders. Once you find these folders, back up their parent folder.

MySQL Database Backup

For the database backup, I also downloaded the backup files provided by Bluehost. There is one SQL script that creates the database and another SQL script that contains the actual data we want to migrate to Lightsail. In my case, I only kept the file that contains the insert statements for the data.
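
If your host does not provide backups, a mysqldump export is the usual equivalent; this is a sketch, and the database name and user below are placeholders.

# Export the WordPress database to a file you can import on Lightsail
mysqldump -u db_user -p wordpress_db > wordpress-data.sql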

Create Lightsail Instance

This is a simple process. Go to the Amazon Lightsail console and click Create Instance. Then select Apps + OS, choose WordPress, select a plan, and name your instance. It is that simple.
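
The same instance can be created from the AWS CLI. A sketch, assuming the CLI is configured; the blueprint and bundle IDs vary over time and by region, so confirm them with get-blueprints and get-bundles.

# Create a WordPress instance on the smallest plan (IDs below are examples)
aws lightsail create-instances --instance-names my-blog --availability-zone us-east-1a --blueprint-id wordpress --bundle-id nano_2_0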

Setup FTP Access

By default, Amazon Lightsail has ports 22, 80, and 443 open. However, if you try to connect to your instance over FTP, it will not work. To get file access to your instance, download your default private key. Once you have the .pem file, add it to your FTP client. In my case, I used FileZilla and was able to connect and copy files.
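
The same key works for SSH. This is a sketch, assuming the downloaded key file name; bitnami is the default user on Lightsail’s Bitnami-based WordPress image.

# Connect to the instance over SSH with the default private key
chmod 400 LightsailDefaultKey.pem
ssh -i LightsailDefaultKey.pem bitnami@<instance-public-ip>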

Copy files and Migrate Data

Since Amazon Lightsail created a new WordPress site, I decided not to restore all the files from my Bluehost backup. I compared both sites and noticed files that were only used by Bluehost, so I copied only the files needed for this migration to work. The first file was wp-config.php, which contains the database username, password, and other settings. I modified this file to contain Lightsail’s database information. I also copied the SQL script with the data. After copying this file, I connected using SSH and ran the script against the MySQL database.
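
The import itself is a single command. This is a sketch assuming the Bitnami defaults: the database is named bitnami_wordpress and the root password comes from the instance’s application credentials.

# Run the Bluehost data script against the new Lightsail MySQL database
mysql -u root -p bitnami_wordpress < wordpress-data.sql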

DNS Changes

With the data migrated from my old Bluehost database to the new database, it was time to make the DNS changes. I first went to Bluehost and updated the nameservers to point to Amazon. After doing that, I created a DNS zone in the Lightsail console and added A records for solutionsbyraymond.com and www.solutionsbyraymond.com.

As you can see, it was a very simple process to migrate from Bluehost to Amazon Lightsail. It took a few hours of reading documentation at the beginning, but in the end I had no issues with the migration. I hope this helps other developers trying to migrate to Lightsail.

Categories
AWS Cloud python

What I learned by reading the AWS CLI codebase


I’m a big fan of Amazon, especially Amazon Web Services. In this post, I want to share what I learned by reading the AWS CLI code base. For those of you not familiar with the acronym, it stands for Amazon Web Services Command Line Interface. The AWS CLI allows you to manage your AWS services through a command line interface. You can download the CLI at http://aws.amazon.com/cli/. If you are using Windows, there is an installer available. If you are using Mac or Linux, you can install it with pip by running this command

pip install awscli

After installing and configuring it, you can use the different services available to manage your resources. For example, to list all of my S3 buckets, I can run the command "aws s3 ls". See the picture below for the results.

aws cli

Now that you know more about the AWS CLI, let’s dive into the code base. The project is hosted on GitHub and you can read the code at https://github.com/aws/aws-cli. It is written in Python. It has unit, functional, and integration tests, and it includes an extensive set of examples of how to run commands.

The code is integrated with Travis CI and is tested against 4 versions of Python: 2.6, 2.7, 3.3, and 3.4. The Travis configuration file also runs the installation script and the test scripts.

There is also a CLI Dev Version section on the home page of the project. It gives you enough information to set up the project and start contributing back to it.
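
The setup boils down to a few commands; a sketch, assuming the standard editable-install workflow for a Python project.

# Clone the repository and install it in development mode
git clone https://github.com/aws/aws-cli.git
cd aws-cli
pip install -e .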

And finally, the project has documentation available. I hope this post gives you a brief introduction to the Amazon Web Services Command Line Interface’s code base.