Categories
Azure

Resources to get you started with Azure

The cloud is here to stay. And with that in mind, we, as software engineers, have to keep our cloud skills up-to-date. AWS and Azure lead the cloud computing space in both services and revenue. Over the last few years, I have gained hands-on experience with AWS and earned the AWS Certified Developer – Associate certification. Now is the perfect time to gain deeper knowledge of Azure services. In this post, I’m going to share resources to get you up to speed with Azure.

First, open an account with Azure by visiting the Azure home page. The home page links to a lot of resources: Azure solutions, products, documentation, pricing, training, the marketplace, partners, support, the blog, and more.

One of my favorite resources is the get started guide for Azure developers. It contains quickstarts, tutorials, samples, concepts, how-to guides, references, and other resources. I highly recommend downloading an SDK and building small apps; nothing beats getting your hands dirty with code that calls Azure services. Azure currently offers SDKs for .NET, Node.js, Java, PHP, Python, Ruby, and Go.
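
To give you a flavor, here is a minimal C# sketch that uploads a file to Azure Blob Storage. This is my own illustration rather than anything from the docs: the connection string, container name, and file name are placeholders, and it assumes the Azure.Storage.Blobs NuGet package and C# top-level statements.

using System;
using Azure.Storage.Blobs;

// Minimal sketch: upload a local file to Blob Storage.
// Replace the connection string, container, and file name with your own.
var serviceClient = new BlobServiceClient("<your-storage-connection-string>");
var container = serviceClient.GetBlobContainerClient("my-container");
await container.CreateIfNotExistsAsync();
await container.GetBlobClient("hello.txt").UploadAsync("hello.txt");
Console.WriteLine("Uploaded hello.txt");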

Another resource I use frequently is reading Azure applications hosted on GitHub. When I’m unable to come up with a solution, I search GitHub for existing ones.

Azure Friday is another great resource to learn more about Azure services and offerings. Scott Hanselman and company have created high-quality videos showing off new features; on average, they run about 15 minutes.

A Cloud Guru has courses to help you get started with Azure. They have an introduction course as well as courses that help you achieve certifications.

That’s it for this post. In future posts, I will target specific services and share my adventures learning Azure.

Categories
AWS Lambda

My First AWS Lambda Using .NET Core

As I prepare for the AWS Certified Solutions Architect – Associate exam, I need to play with more services. It’s crucial to gain hands-on experience with these AWS services; it’s not enough to just read white papers and FAQs. I’ve heard good things about AWS Lambda, and now it’s time to build something with it. In this post, I want to share how I created my first Lambda function using .NET Core.

Before we dive into AWS Lambda, let’s understand what it is. Lambda is a service that allows you to run code without thinking about provisioning or managing servers. You upload your code and AWS handles the rest. Nice! Here is the official summary, “AWS Lambda lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.”

Now that we know what Lambda is, let’s install the required software to create a Lambda function using .NET Core. First, install the Lambda templates from NuGet: using a terminal or command prompt, type “dotnet new -i Amazon.Lambda.Templates”. This installs the Lambda templates so you can get up and running quickly. To test it, type “dotnet new” and press Enter; the new Lambda templates will show up in the list.

There are two categories of templates: Lambda functions and Lambda serverless applications. To keep it simple, I’m going to use a simple Lambda function that integrates with S3. Next, we need to install the AWS Lambda global tool: using a terminal or command prompt, type “dotnet tool install -g Amazon.Lambda.Tools”.

With the required software installed, it’s time to create our first Lambda using .NET Core. Using a terminal or command prompt, create a new directory called “firstLambda” and cd into it. Now type “dotnet new lambda.S3” to create a new function from the AWS Lambda templates. After creating the function, we need to update a config file with a profile and region. Using a text editor or IDE, open up the new project and update the profile and region settings in aws-lambda-tools-defaults.json.
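
For reference, aws-lambda-tools-defaults.json looks roughly like this; the values below are illustrative and will depend on your profile, region, and project name:

{
  "profile": "default",
  "region": "us-east-1",
  "configuration": "Release",
  "framework": "netcoreapp2.1",
  "function-runtime": "dotnetcore2.1",
  "function-memory-size": 256,
  "function-timeout": 30,
  "function-handler": "firstLambda::firstLambda.Function::FunctionHandler"
}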

AWS Lambda will use these settings to deploy and run your function. Let’s take a look at the Function.cs file.
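
Trimmed down, the generated Function.cs looks roughly like this:

using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;

namespace firstLambda
{
    public class Function
    {
        IAmazonS3 S3Client { get; }

        public Function() : this(new AmazonS3Client()) { }

        public Function(IAmazonS3 s3Client)
        {
            S3Client = s3Client;
        }

        // Invoked with an S3 event (e.g. a put or delete);
        // returns the content type of the affected object.
        public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context)
        {
            var s3Event = evnt.Records?[0].S3;
            if (s3Event == null)
            {
                return null;
            }

            var response = await S3Client.GetObjectMetadataAsync(s3Event.Bucket.Name, s3Event.Object.Key);
            return response.Headers.ContentType;
        }
    }
}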

The constructor takes an IAmazonS3 object, and the async method FunctionHandler is where our main logic lives. Our Lambda function is triggered by an S3 event, like a delete-object or put-object event. Using that event information, we retrieve the object’s metadata with GetObjectMetadataAsync and finally return the content type.

Let’s deploy our first Lambda function to AWS using the CLI. In a terminal window, type “dotnet lambda deploy-function agileraymond-1st-lambda” (I’m using agileraymond-1st-lambda as my function name). This command uses the profile and region from our config file, so make sure its permissions are set correctly; otherwise you will get errors. The command will also ask you to provide a role or give you the option to create a new one. To verify that your function made it to AWS, check the Lambda console.

To test our new lambda function locally, we can use the test project that was created along with our new function.
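
Trimmed down, the generated test looks roughly like this: it creates a temporary bucket, uploads a small text file, and invokes the handler with a hand-built S3 event (the bucket name is generated on the fly, and cleanup is omitted for brevity):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.Lambda.S3Events;
using Amazon.Lambda.TestUtilities;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using Xunit;
using firstLambda;

public class FunctionTest
{
    [Fact]
    public async Task TestS3EventLambdaFunction()
    {
        var s3Client = new AmazonS3Client(RegionEndpoint.USEast1);
        var bucketName = "firstlambda-" + DateTime.Now.Ticks;
        await s3Client.PutBucketAsync(bucketName);

        await s3Client.PutObjectAsync(new PutObjectRequest
        {
            BucketName = bucketName,
            Key = "test.txt",
            ContentBody = "sample data"
        });

        // Build a fake S3 put event that points at the object we just uploaded.
        var s3Event = new S3Event
        {
            Records = new List<S3EventNotification.S3EventNotificationRecord>
            {
                new S3EventNotification.S3EventNotificationRecord
                {
                    S3 = new S3EventNotification.S3Entity
                    {
                        Bucket = new S3EventNotification.S3BucketEntity { Name = bucketName },
                        Object = new S3EventNotification.S3ObjectEntity { Key = "test.txt" }
                    }
                }
            }
        };

        var function = new Function(s3Client);
        var contentType = await function.FunctionHandler(s3Event, new TestLambdaContext());
        Assert.Equal("text/plain", contentType);
    }
}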

Go back to the terminal window and type “dotnet test” to run the integration test. If everything is set up correctly, you will see one passing test. That’s it for this post. In a future post, I’m going to test the function from the AWS console.

Categories
AWS

Understanding IAM policies

One of the most critical components of any system is security. In AWS, security is at the top of the list. With Identity and Access Management (IAM), you can create users, roles, policies, and groups to secure your AWS resources. In this post, I’m going to share how to secure an S3 bucket by creating a new user with limited access. Let’s get started.

Create a new user

To create a new user, sign in to the AWS console and select IAM. Select Users from the left menu and click Add user. Enter a user name and select Programmatic access in the access type section.

Click Next. Since we don’t have a policy in place, click Next again.

Now it’s time to review our new user. Notice that AWS displays a warning that this user has no permissions. Click Next.

We’re at the final step of creating our new user. Click the Download .csv button; this file contains the access key id and secret access key, which we’ll use with the AWS CLI to access S3 buckets. You can also click the Show link below the secret access key header.

Now that we have our user ready, it’s time to create a new policy with limited permissions on an S3 bucket. Click the Policies link in the left menu, then click Create policy.

There are two ways to create a policy: with the visual editor or with a JSON document. For this exercise, I’m going to specify the policy as JSON. Click the JSON tab next to the Visual editor tab and paste the JSON below.
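
Here is the policy (the Sid is arbitrary, and the bucket name is mine; the /* suffix is needed because PutObject applies to objects inside the bucket, not the bucket itself):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjectOnly",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::agileraymond-s3/*"
    }
  ]
}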

This simple policy allows the S3 PutObject action on a bucket named agileraymond-s3. As you can see, the policy is limited in what it can do. AWS recommends following the principle of least privilege: only grant access to the resources your application needs. Click Next and finally create your new policy.

With our new user and policy in place, we have to attach the policy to the user. Select your user and click the Add permissions button.

Click the “Attach existing policies directly” tab and filter the policies by selecting “Customer managed” from the filter menu next to the search input.

Click Next and review your changes, then add the permissions. We’re ready to test our new user. Let’s use the AWS CLI: in a terminal or command prompt, type “aws configure” and enter the access key, secret access key, region, and output format. Make sure you select the region where your resources live; in my case, I selected us-east-1 because that’s where my bucket resides.

Now type “aws s3 ls” in your terminal window. You should see an error, since this user doesn’t have permission to list buckets; it only has PutObject access on one bucket. To upload a file to our S3 bucket, type “aws s3 cp myfile.txt s3://yourbucketname”. If you go back to the AWS console, you should see myfile.txt inside your bucket.

In conclusion, secure your resources by default. Create users with limited permissions and give them access only to the resources they need. See you next time.

Categories
AWS General

Host a website using AWS S3

Simple Storage Service (S3) was one of the first services offered by AWS. With S3 you can store your files in the cloud. In addition to storing files, S3 lets you host a static website. In this post, I will share how to accomplish this using the S3 console.

First, log in to the AWS console. Then go to the S3 console and create a bucket. To keep it simple, a bucket is like a folder or directory on your computer. For this example, I’m using agileraymond-web as my bucket name and US East (N. Virginia) as my region. Click the Create button to create your bucket. With our bucket in place, we can enable static website hosting. Select your bucket and click the Properties tab.

Now click anywhere in the Static website hosting section and select “Use this bucket to host a website”. I’m going to use index.html for my index page and error.html for my error page. Click Save. Go ahead and create these two HTML files. To upload them, click the Overview tab and click Upload.

Add your files and click the Upload button. In the Overview section of your bucket, you will see the two files. Currently the bucket and these files are private. Since we are hosting a static website and other people need access to it, we have to update the bucket permissions. Go to the bucket’s Permissions tab and select Bucket policy. Copy and paste the policy below, making sure to update the resource name: my bucket is agileraymond-web, but yours will be different.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::agileraymond-web/*"]
    }
  ]
}

Click Save. After saving your policy, you will see the following message: “This bucket has public access. You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.” For now, ignore this warning, since the bucket is meant to act as a public website; the policy grants read access to every object in the bucket. It’s time to test our new website. To get the URL, go to the bucket properties and click Static website hosting; you will find the URL next to the Endpoint label. Copy and paste it into a new browser window and add /index.html to the end. If everything is set up correctly, you will see the index.html page.

To test the error page, go ahead and delete index.html, then try to browse to it again. You should now see the error page, since index.html no longer exists. As you can see, it’s very easy to create a static website using S3. See you soon!

Categories
General

How I landed my first job in IT


Before I tell you about my first job in IT, let me give you some background. During my last year at Southern Methodist University, I got my resume ready and started applying for IT jobs. I attended a couple of job interviews, but none of them resulted in offers. I graduated in May of 2001 and decided to take a break from my job search, continuing to work with my parents in their small furniture store. From 2001 to 2008, I devoted my time to improving the store and increasing sales. However, the store was in a bad financial position. My brother, JR, secured a job with the City of Dallas as a code inspector, and after he left the store, I started applying for IT jobs again. I was desperate to get into IT, so I applied to dozens of places and went to dozens of interviews. Most hiring managers told me they were looking for more experienced developers; my only experience at the time was school projects and applications I had built for the furniture store. I was very disappointed and almost gave up my job search again. But this time I was determined to get a job as a software developer, or any position in IT. I posted my resume on job sites like Dice, Monster, and others.

I received a call from James Paul, co-founder of PrintPlace.com. I couldn’t believe someone was calling me about a job in IT. He gave me a brief description of the position and asked me to come to their offices for a face-to-face interview. The next morning I met James and Nic. The interview went well, and the next step was to speak with John, the software architect; I answered most of his questions correctly. Finally I spoke with Shawn, the founder of PrintPlace, and he offered me the job. I was so happy: I was finally going to start my career as a software developer. In this role, I wore many hats: desktop support, setting up phones and servers, and some .NET coding.

Now it’s your turn. How did you land your first job in IT?

Categories
Azure Cloud

Azure Resource Group

AWS has been my cloud provider for many years. I have used it to host .NET applications and SQL Server databases, along with other services like email, storage, and queues. I have gained valuable experience on that platform, but it’s time to play with Azure. In this post, I want to share how to create resource groups and what their benefits are.

First, let’s create a resource group. After you log in to the Azure portal, click Resource groups in the left menu. Now click the Add button, enter a valid resource group name, and select a subscription and location. For this example, I’m using dev-resource-group, Pay-As-You-Go, and South Central US.
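
If you prefer the command line, the Azure CLI can do the same thing in one line (this assumes the az CLI is installed and you are logged in):

az group create --name dev-resource-group --location southcentralus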

A resource group is a container that holds resources like web sites, storage, and databases in a group so we can manage them as a single unit. I like to think of resource groups as a dinner plate: I use the plate to hold my food (meat, vegetables, dessert, etc.), and once I’m done eating I can throw away the plate along with any food that is left.

Now let’s add a new app service. Click the App Services link in the left menu and click Add. In the web apps section, select WordPress on Linux and click the Create button. Enter the required fields, making sure you select the resource group created in the previous step.

To verify that our resource group is associated with our new WordPress site, click Resource groups again and select the group. I see three items associated with mine: an app service, an Azure Database for MySQL server, and an app service plan.

Let’s create one more app service. Choose Web App, click Create, and fill in the required fields, again selecting the same resource group. In the OS section, I selected Docker and configured the container.

Now our resource group has a total of four items in it. These items make up my dev environment, and I’m ready to delete that environment since I no longer need it. Select the resource group, click the three dots, and select Delete resource group.

Type the resource group name and click the Delete button. After a few minutes, the four items associated with our resource group will be deleted along with the group itself. As you can see, resource groups let us organize related resources so we can manage them as a unit. I have seen resource groups used for different environments like dev, test, and production. Once you are done with your resources, just delete the resource group; it will save you a lot of time and effort.
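
The teardown is also a one-liner with the Azure CLI; the --yes flag skips the confirmation prompt:

az group delete --name dev-resource-group --yes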

Categories
General

How to Upload Documents using ConnectWise API

For the last couple of months, I have been using the ConnectWise API to integrate it with our custom software solution. It was fairly easy to add new companies, customers, tickets, and opportunities. Recently I was asked to add the ability to upload documents. After reading the documentation, I coded a solution, but it didn’t work as I expected. After much trial and error, I was able to add documents through the ConnectWise API. In this post, I want to share my C# code for adding system documents with the ConnectWise API.

Here is the C# code to upload a document.
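
In outline, the upload is a multipart/form-data POST to the system/documents endpoint. Below is a minimal HttpClient sketch of that call, not the exact code from my project: treat the endpoint path, form field names, and auth format as assumptions to verify against the ConnectWise documentation, and replace the site URL, company id, keys, and record id with your own.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class DocumentUploader
{
    public static async Task UploadAsync(string filePath)
    {
        using var client = new HttpClient();

        // ConnectWise basic auth takes the form "companyId+publicKey:privateKey".
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("mycompany+myPublicKey:myPrivateKey"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);

        using var form = new MultipartFormDataContent();
        // Field names below are assumptions; check the system/documents docs.
        form.Add(new StringContent("Ticket"), "recordType");
        form.Add(new StringContent("12345"), "recordId");
        form.Add(new StringContent("My document"), "title");
        form.Add(new ByteArrayContent(await File.ReadAllBytesAsync(filePath)),
            "file", Path.GetFileName(filePath));

        var response = await client.PostAsync(
            "https://your-site/v4_6_release/apis/3.0/system/documents", form);
        response.EnsureSuccessStatusCode();
    }
}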

I hope someone else can use this code to upload documents. There is still room for improvement in ConnectWise’s system/upload documentation, and the C# SDK does not include sample upload code. I also want to mention that most of the C# code used to upload the document was adapted from examples found online. See you soon.

Categories
General

Update

Last month I started working as an independent software developer, and I was able to find two contracts writing .NET code. I found the first job through a referral; the other was posted on LinkedIn. Both jobs allow me to work remotely. In this post I want to provide a quick update on these two contracts.

CRM Integration

In this job, I’m integrating two CRM providers, ConnectWise and Autotask. These CRM providers let you create companies, contacts, tickets, opportunities, and so on. I’m using Visual Studio Team Services as our source code management tool. We’re using Web Forms for this project, and on the JavaScript side we’re using jQuery, Knockout, and Bootstrap.

SMS marketing platform

In this job, we’re creating a new SMS marketing platform using Twilio, Docker, AWS, .NET Core 2, ServiceStack, and Git. One of the challenges I have faced is learning these technologies, since my knowledge in these areas was limited. Let me give you an example. Every time we commit code, the code base is built with Travis CI, and after the code is packaged, it gets deployed as a Docker container. Since I’m new to Docker, I had no idea how to debug code running in a container. The strange thing was that my code worked locally, but the same code was not working in QA. After asking other developers, we came to the conclusion that the issue wasn’t the code: during a git merge, a line had been removed that affected our deployment logic.

That’s it for now. See you next month.

Categories
General

Using AWS Python SDK to Read Simple Queue Service Messages

In this post, I want to share how to use AWS Simple Queue Service (SQS) to manage email bounces and complaints. In my previous post, I wrote about sending emails using the AWS Python SDK. First, we have to set up two SNS topics to handle email bounces and complaints. Go to the Simple Notification Service console and add a topic. After creating an SNS topic, the console lists it along with its subscriptions, including their protocol and endpoint.

Pay attention to the protocol column. The protocol for my email-bounce subscription is set to SQS. This is important, since we want to process SQS messages every time SES receives a bounce or complaint. The endpoint is set to an SQS queue.

Now, let’s go back to the SES console and add a configuration set. My SNS destination is named email-bounces and is linked to the email-bounce topic we created before.

During the SNS setup, we also have to create an SQS queue to hold our messages.

Since SNS acts as a proxy, we have to set up permissions on this queue so it can receive SNS messages.
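
That permission lives in a queue policy; something along these lines works (the account id and ARNs are placeholders for your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:email-bounce-queue",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:email-bounce" }
      }
    }
  ]
}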

After completing these steps, we are ready to read messages using Python.

I created a very simple Python script. At the top, I import boto3, and the constructor creates an SQS client. To read messages, I created a method that takes a queue URL as an argument; the queueUrl points at our email-bounce queue. To actually read a message, we call receive_message, passing the queue URL along with MaxNumberOfMessages. In this case, I’m using 1 to keep things simple. Finally, we return the message.

I also have another method called GetEmailAddress that takes a message as an argument. It retrieves the body string, parses it to find the email address, and returns that address at the end.
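
Putting the two methods together, here is a sketch of what the script can look like. One liberty: where my original parsed the body string by hand, this version uses the json module, since SNS wraps the SES bounce notification as a JSON string inside the message body. The queue URL is a placeholder.

import boto3
import json

class BounceProcessor:
    def __init__(self):
        # boto3 picks up credentials from your configured profile
        self.sqsClient = boto3.client('sqs')

    def get_message(self, queue_url):
        # Read at most one message to keep things simple
        response = self.sqsClient.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=1)
        messages = response.get('Messages', [])
        return messages[0] if messages else None

    def get_email_address(self, message):
        # The SQS body is an SNS envelope; its 'Message' field holds the
        # SES bounce notification as a JSON string
        envelope = json.loads(message['Body'])
        notification = json.loads(envelope['Message'])
        return notification['bounce']['bouncedRecipients'][0]['emailAddress']

queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/email-bounce'
processor = BounceProcessor()
message = processor.get_message(queueUrl)
if message:
    print(processor.get_email_address(message))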

With this email address, I can clean up my records so I don’t send emails to it. That’s it for now. Next week, I’ll continue using Python scripts to interact with DynamoDB.

Categories
General

Sending emails with AWS Python SDK


Houston, we have a problem. Not Houston; I’m the one with the problem. I need to send emails to my list of users, but I know that some of those users are spam accounts. I need a way to send emails and also remove invalid addresses from my system.

In the past I have used the AWS .NET SDK to send emails and track bounces and complaints. But I want to challenge myself, so this time I’m playing with the Python SDK. In this post, I want to share how to send emails using the AWS Python SDK.

Install the Python SDK (boto3) by following these instructions. Make sure to set up a profile that has permission to send emails. In this example, I’m using a profile named “python-scripts” that has permission to send SES emails; adjust the name based on your setup.

After installing boto3, we are ready to create a class responsible for sending emails. Take a look at EmailController.py below:
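
Here is a minimal sketch of what that class can look like; the sender address, subject, and body are placeholders, and the source address must be an SES-verified identity:

import boto3

class EmailController:
    def __init__(self):
        # Use the profile that has permission to send SES emails
        session = boto3.Session(profile_name='python-scripts')
        self.sesClient = session.client('ses')

    def send_email(self, to_address):
        return self.sesClient.send_email(
            Source='me@example.com',
            Destination={'ToAddresses': [to_address]},
            Message={
                'Subject': {'Data': 'Hello from SES'},
                'Body': {'Text': {'Data': 'This is a test email.'}}
            })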


Now let’s create a class that uses EmailController.py. Take a look at EmailRunner.py.

I like to keep things simple: I have a CSV file where each line holds one email address. My runner reads each line and sends an email. If any errors occur while sending, I use another CSV file to keep track of them.
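
A sketch of EmailRunner.py along those lines (the file names are mine):

import csv
from EmailController import EmailController

controller = EmailController()
errors = []

# Each line of the input file holds one email address
with open('emails.csv') as input_file:
    for row in csv.reader(input_file):
        address = row[0]
        try:
            controller.send_email(address)
        except Exception as error:
            errors.append([address, str(error)])

# Keep track of any failures in a second csv file
if errors:
    with open('email_errors.csv', 'w', newline='') as error_file:
        csv.writer(error_file).writerows(errors)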

If the emails are sent successfully, we receive a response with an HTTP status of 200.

That’s all for today. In a future post, I’m going to show you how to set up SNS notifications for bounces and complaints. Have a nice day!