DevOps Archives - Anuj Varma, Hands-On Technology Architect, Clean Air Activist
https://www.anujvarma.com/category/cloud-computing/devops/
Production Grade Technical Solutions | Data Encryption and Public Cloud Expert

Using git desktop with VS Code
https://www.anujvarma.com/using-git-desktop-with-vs-code/ | Thu, 20 Oct 2022

Git Desktop with Visual Studio Code

Step 1. In GitHub Desktop, clone the repo by its URL. You will need to provide a local folder for the clone.

Step 2. In VS Code, open that local folder from the File menu.

That’s all that is needed. Of course, you will need to be signed in to GitHub with your credentials.

This was meant to be a quick start to integrating Git with Visual Studio Code.

Once you’ve created a new repo in github
https://www.anujvarma.com/once-youve-created-a-new-repo-in-github/ | Tue, 06 Oct 2020

  • Say you have a repo in GitHub called testrepo.
  • Browse to a local directory – and clone the repo in there (by typing git clone followed by the full https://…testrepo URL).
  • A clone is already tracked by your local git. If, instead, you started with a plain local folder (no clone), initialize it and wire it to the GitHub repo by typing:
  • git init
    git add README.md (or whatever file you may already have in the local folder)
    git commit -m "test commit"
    git remote add origin https://github.com/yourgithub/testrepo.git
    git push -u origin main
    
    NOTE: On the commit, if you do not provide a message with -m, you will be dropped into vi to enter one. Type any text ('test commit'), then press Esc and type :wq to save and exit vi.
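The steps above can be condensed into a runnable sketch. Since a real github.com URL would need credentials, a local bare repository stands in for the empty GitHub repo below – everything else is the same flow:

```shell
# Runnable sketch of the steps above. A local bare repo ("origin.git")
# stands in for the empty GitHub repo - its path plays the role of the
# https://github.com/yourgithub/testrepo.git URL.
set -e
work=$(mktemp -d)

git init --bare "$work/origin.git"           # stands in for the empty repo on GitHub

mkdir "$work/testrepo" && cd "$work/testrepo"
git init                                     # the local repo that git will track
echo "# testrepo" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -m "test commit"

git remote add origin "$work/origin.git"     # would be the real GitHub URL
git push -u origin HEAD                      # pushes the current branch, whatever its name
```

With a real GitHub repo, replace the two "$work/origin.git" paths with the repository’s https URL and authenticate when prompted.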


  • Need an experienced AWS/GCP/Azure Professional to help out with your Public Cloud Strategy? Set up a time with Anuj Varma.

    Terraform Basics and Helpful Commands
    https://www.anujvarma.com/terraform-basics-and-helpful-commands/ | Sat, 04 Jul 2020

    Overview

    These are just some quick recap notes and troubleshooting steps. There’s much more to terraform, but this is a quick basics overview, getting started guide and a short troubleshooting guide.

    How does Terraform translate its resource definitions to the target cloud?

    Terraform works via a ‘provider’ plugin for each cloud platform. The provider translates the resource definitions in the .tf file into API calls (e.g. AWS API calls for the AWS provider). There are cases where certain features are only available in the AWS CLI – and not via the API (such as enabling MFA delete on S3 buckets). In these rare cases, the default AWS provider will not work.
    The provider model is what makes Terraform both cloud agnostic and super flexible.
    For example, if you don’t like AWS CloudFront and want to use Cloudflare as your CDN on AWS – you simply plug in the Cloudflare Terraform provider. The same goes if you do not like Route 53 and want to use another DNS service. The Terraform Registry has the full list of providers (they keep adding more).
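As an illustrative sketch of that plug-in model – swapping Route 53 for Cloudflare DNS. The token and zone variables and the IP below are placeholders, not values from the post:

```hcl
provider "aws" {
  region = "us-east-1"
}

# Same .tf project, different provider for DNS
provider "cloudflare" {
  api_token = var.cloudflare_api_token   # hypothetical input variable
}

# A DNS record managed by Cloudflare instead of Route 53
resource "cloudflare_record" "www" {
  zone_id = var.cloudflare_zone_id       # hypothetical input variable
  name    = "www"
  type    = "A"
  value   = "203.0.113.10"               # example IP (TEST-NET range)
}
```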

    Terraform planning and applying

    After applying, Terraform maintains a snapshot of the state of resources it provisioned. This is stored in a .tfstate file and is used as the baseline for future plan and apply executions.

     After a while of using Terraform, I realized that if I ever wanted to know whether something had unintentionally changed in our infrastructure, I just needed to run plan and see if Terraform intended to do anything.

    If anything was changed intentionally, it would have been done in the source code and Terraform would not plan to do anything. However, if anyone changed any part of our AWS infrastructure manually, Terraform’s plan would identify it and let us know.

    In other words, if our AWS or GCP infrastructure drifted from its expected state, then Terraform’s plan would detect it.
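That drift check can be scripted. terraform plan supports a -detailed-exitcode flag that exits 0 when nothing changed, 2 when the plan wants to change something, and 1 on error – so “did anything drift?” becomes a plain exit-code test. A minimal sketch, guarded so it degrades gracefully where terraform is not installed:

```shell
# Drift check sketch: plan's exit code tells us whether anything drifted.
if command -v terraform >/dev/null 2>&1; then
  terraform plan -detailed-exitcode >/dev/null 2>&1
  case $? in
    0) msg="no drift: infrastructure matches the code" ;;
    2) msg="drift detected: plan wants to make changes" ;;
    *) msg="plan failed: check credentials / config" ;;
  esac
else
  msg="terraform not installed"
fi
echo "$msg"
```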

    Local Setup with credentials (AWS , GCP Providers)

    Google Provider (download the JSON-formatted key for your service account from Google’s Console – that’s what account.json refers to)

    provider "google" {
      credentials = file("account.json")
      project     = "my-project-id"
      region      = "us-central1"
    }
    
    AWS Provider
    provider "aws" {
      version = "~> 2.0"
      region  = "us-east-1"
    }

    Validating, Planning and Applying

    terraform validate
    
    terraform plan
    
    terraform apply

    Formatting Code

    The terraform fmt command rewrites Terraform configuration files to a canonical format and style.

    Targeting Specific Resources instead of ALL Resources in your TF module

    terraform apply -target=google_storage_bucket.my_storage_bucket

    Troubleshooting / Debugging

    Terraform apply can fail without giving a meaningful reason. To see the underlying cause, set the TF_LOG environment variable to "DEBUG". On a Windows machine, fire up a powershell prompt to do so. On a Linux box, a bash prompt will do the same.

    From a powershell prompt, type
    $env:TF_LOG="DEBUG"
    
    From a bash prompt, type
    export TF_LOG=DEBUG
    
    Sometimes, your debug trace will show a resource as 'tainted'. To untaint tainted resources:
    
    terraform untaint resourcename   (where resourcename is the resource address, e.g. aws_instance.myvm)

    String Concat

    format("%s/%s",var.string,"string2")
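For reference, plain string interpolation produces the same result as format() (using the same variable name as above):

```hcl
locals {
  joined_with_format = format("%s/%s", var.string, "string2")
  joined_with_interp = "${var.string}/string2"   # equivalent result
}
```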

    Summary

    That wraps up the quick recap: a basic overview of how Terraform’s provider model works, a getting-started setup for the AWS and GCP providers, and a handful of troubleshooting commands. There’s much more to Terraform beyond these notes.



    How do you validate input variables in terraform?
    https://www.anujvarma.com/how-do-you-validate-input-variables-in-terraform/ | Tue, 25 Feb 2020

    How do you validate input variables in terraform?

    This is an experimental feature, which means you have to specify the following inside your variables.tf (or wherever your variables are defined):

    terraform {
      experiments = [variable_validation]
    }

    Then simply add a validation block to the variable, with whatever condition you need (it can be a simple string contains or a more complicated regex).

    Example – validate that my domain name starts with www.

    variable "mydomainname" {
    
      validation {
        condition     = length(regexall("^www", var.mydomainname)) > 0
        error_message = "Should start with www."
      }
    }

    To use a regex here, you actually need regexall – it returns a list of ALL the matches, which you can COUNT with length() (regex, by itself, returns only the matched substring – and raises an error when there is no match). Terraform’s documentation has the full list of supported regex patterns.
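To make the distinction concrete, here is an illustrative locals block (mydomainname as in the example above):

```hcl
locals {
  # regexall returns a LIST of matches - safe to count, even when empty
  www_matches = regexall("^www", var.mydomainname)
  starts_www  = length(local.www_matches) > 0

  # regex returns the matched substring itself - and raises an error when
  # there is no match, so it cannot be used directly in a condition:
  # first_match = regex("^www", var.mydomainname)
}
```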

    Current Limitations (Only single variable validation)

    Unfortunately, in its current experimental version, terraform does not support passing a loop variable into the ‘condition’ statement. The condition HAS to reference the input variable by name (i.e. – it cannot accept an each.value).

    This code WILL NOT work

    variable "mytestdomainnames" {
    
      listnames = split(",", var.mytestdomainnames)
    
      for_each = var.listnames
    
      validation {
        condition     = length(regexall("^www", each.value)) > 0
        error_message = "Should start with www."
      }
    }

    If you cannot use the validation block

    Here is a workaround that I used prior to the introduction of the validation block in terraform: a null resource that forces an error if the condition doesn’t evaluate to true.

    variable "mydomainname" {
    
    }
    
    # Illustrative pre-validation-block hack: when the condition fails, count is
    # assigned a string instead of a number, which forces `terraform plan` to
    # fail - and the "ERROR: ..." string shows up in the error output.
    resource "null_resource" "nullres" {
      count = length(regexall("^www", var.mydomainname)) > 0 ? 0 : "ERROR: Must start with www"
    }

    Summary

    The validation block in terraform is a necessary new feature. When combined with regex or regexall, it can validate pretty much any kind of input pattern (terraform’s documentation has the full list of supported regex patterns).
    Unfortunately, while it is great for single variable validation, it does not support any kind of looping or multi valued validation.

    Quickstart Terraform for AWS
    https://www.anujvarma.com/quickstart-terraform/ | Wed, 16 Oct 2019

  • First, install terraform and add it to the PATH variable.
  • Create a folder which will contain your .tf files. cd to that folder (from a cmd prompt).
  • From the same command prompt, type terraform init – this downloads all the libraries for the providers (including AWS).
  • Specify a provider in your .tf file – as shown below (using VS Code is recommended).
  • Type terraform validate in the folder containing the .tf file.
  • Type terraform apply in the folder containing the .tf file.
  • The ‘assume role’ section specifies WHO is allowed to sts:AssumeRole. The role definition will fail without at least one sts:AssumeRole principal.
  • Sample – create a security auditor role and attach the managed policy SecurityAudit to it. Also attach AWSSecurityHubFullAccess to the same role.

    provider "aws" {
      version = "~> 2.0"
      region  = "us-east-1"
    }
    
    resource "aws_iam_role" "role" {
      name = "security_auditor_role"
    
      assume_role_policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {"AWS": "arn:aws:iam::MY_ACCT_NUMBER:root"},
          "Effect": "Allow",
          "Sid": ""
        }
      ]
    }
    EOF
    
      tags = {
        resourcetype = "production_role"
      }
    }
    
    resource "aws_iam_policy_attachment" "policies-attach1" {
      name       = "security-policies-attachment"
      roles      = ["security_auditor_role"]
      policy_arn = "arn:aws:iam::aws:policy/SecurityAudit"
    }
    
    resource "aws_iam_policy_attachment" "policies-attach2" {
      name       = "security-policies-attachment2"
      roles      = ["security_auditor_role"]
      policy_arn = "arn:aws:iam::aws:policy/AWSSecurityHubFullAccess"
    }

    Dynamic Infrastructure, Disaster Recovery and Netflix’s Simian Army
    https://www.anujvarma.com/dynamic-infrastructure-disaster-recovery-and-netflixs-simian-army/ | Wed, 17 May 2017

    Disaster Recovery has never been a fun thing to plan – and even less of a fun thing to test successfully. This is especially true for those mission critical apps that really need this level of D.R. planning and testing.

    Traditionally, cold standby servers – often entire standby environments – need to be provisioned, brought up and kept running. Not only are these environments super-expensive to create, but they are also ridiculously time consuming to maintain (consider keeping each tier of the standby environment in sync with each tier of the live production environment).

    Cloud Solution versus Traditional Solution – for Common Scenarios

    • Have you ever worried about a production server (or servers) in your farm crashing?
    • Have you ever had potentially breaking code changes checked in by developers – changes that haven’t been completely tested prior to a production push?
    • Have you ever dealt with increasing response times from your web application in the face of increasing user sessions – often causing your application to crash?
    Scenario 1: A server in your web (or data) farm crashes.

    Traditional Solution: Failover to existing nodes that haven’t crashed. However, if multiple nodes crash – or the load balancer crashes – you are still looking at serious downtime.

    Cloud Solution: A monitoring service detects the crashed node and automatically notifies / triggers a ‘server template’ that spins up a new one. No manual intervention required. The new instance can be configured exactly like the failed instance – or can be configured to a prior state.

    Scenario 2: The development team checks in ‘potentially breaking changes’ that need to be tested out.

    Traditional Solution: A new TEST environment needs to be provisioned, the new codebase deployed, and the full testing life cycle followed.

    Cloud Solution: Your CI (Continuous Integration) tool of choice automatically detects the check-in – and spins up an additional TEST VM to deploy the code to. Your CI automated testing then runs your suite of tests against the new codebase – and identifies all the ‘breaking’ changes. This saves a lot of manual effort in creating a test environment and running an entire QA effort around the new code.

    Scenario 3: Load (concurrent users) spikes up suddenly – causing existing servers to choke, maybe even crash.

    Traditional Solution: You would need to manually provision more hardware – for either vertical or horizontal scaling, or both! Time consuming and expensive – and entails downtime!

    Cloud Solution: The infrastructure’s auto scaling capabilities spin up new instances and dynamically add them to the pool. No manual intervention required.
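The auto scaling scenario above maps onto a couple of Terraform resources. A minimal, illustrative sketch – the AMI id, sizes and zones are placeholders, not values from the post:

```hcl
resource "aws_launch_configuration" "web" {
  image_id      = "ami-0123456789abcdef0"   # hypothetical web-server image
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "web" {
  launch_configuration = aws_launch_configuration.web.name
  availability_zones   = ["us-east-1a", "us-east-1b"]
  min_size             = 2   # never fewer than two nodes in the pool
  max_size             = 6   # scale out under load, back in when it subsides
}
```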

    Server Templates (The Magic Of)

    Thanks to ‘server templates’ (e.g. VMware templates, AWS AMIs, AWS CloudFormation templates), spinning up entire VMs with a few lines of code has become a straightforward exercise in the cloud world. More importantly, these VMs can be defined with specific ‘roles’ – a WebServer role, a DB role etc. The exact role ‘configuration’ can be stored on a configuration server (Chef Server, Puppet Master, Ansible Tower…) – making it immune to any accidental overwrites/destruction.

    The bottom line is that you not only get a blueprint for automatic infrastructure creation – you also get a safe for locking this blueprint away so that no one can destroy it. Template repositories, template versioning, hardening of repositories – these are all evolving at a rapid pace, making cloud data-center solutions as secure as traditional data center solutions.
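In Terraform terms, the ‘server template plus role’ idea can be sketched as a resource stamped out from a prebuilt image, with the role carried as a tag (the AMI id is a placeholder):

```hcl
resource "aws_instance" "web_role" {
  ami           = "ami-0123456789abcdef0"   # hypothetical AMI baked for the WebServer role
  instance_type = "t2.micro"

  tags = {
    role = "WebServer"   # the role; its configuration lives on the config server
  }
}
```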

    What does all this have to do with Disaster Recovery?

      As you probably guessed from the recap above, in the cloud world, keeping COLD STANDBYs just doesn’t make much sense.  When hardware fails, it is relatively painless to re-generate an identical copy of the crashed server.

    The devil is in the details, of course, and one has to be mindful of how to recover any data, log files etc. on the crashed server. For example, all the performance metrics (CPU usage, average memory usage, etc.) are lost with the server crash. There are, fortunately, cloud patterns that help with centralized logging, data updates, performance metrics and other commonly needed server stats.

    Here’s the rub…Cold Standby Servers can be replaced with on-demand, re-buildable instances. The instances do not need to be on standby – all that is needed are (well-tested) server templates that are easily accessible in case of a disaster situation. These server templates can recreate the crashed instances – in a way that retrieves all of the configuration data  that was part of the crashed instance.

    Netflix’s Simian Army – Testing D.R. in the real world

    Say – your team has designed the perfect D.R. Strategy.  Just how far do you test it?  Do you take down just the data tier? Do you bring down your entire environment to simulate a real-world disaster situation?

    Disasters are random events – and, in order to simulate true disasters, one needs to RANDOMLY bring down (pieces of) a production environment.

    Netflix does just that. With an automated tool (named Chaos Monkey), Netflix randomly seeks out instances to destroy. No one is given a ‘heads up’ that Chaos Monkey is about to run; it just runs and wreaks havoc along the way. If you have a truly resilient environment, the monkey’s attempts are essentially negated by new instances spinning up to replace the destroyed ones.   Otherwise, the monkey exposes any weaknesses in the infrastructure.

    If you think destroying single instances is extreme, how about destroying an entire data center? Netflix’s Chaos Gorilla does just that. It removes an entire Availability Zone and tests for repercussions. Pretty gutsy if you think about it.

    There are a few more monkeys up Netflix’s sleeve. They even have a ‘latency monkey’ that randomly introduces artificial delays into the servers serving content, to see if upstream servers can handle the ‘throttling’ effectively.

    Summary

    Traditionally, D.R. meant ‘cold standby’ environments entailing huge costs along with maintenance headaches. Traditionally, TESTING D.R. scenarios was also a challenge – as it was both expensive and time consuming. More often than not, what was tested was not a TRUE disaster, but rather a scaled down version of a disaster scenario.

    With the option of building an entire data center in the cloud, one can now leverage all of the auto recovery features and devops improvements in cloud technology. A server going down (for whatever reason) is no longer a cause for serious concern. Not only can a cloud service detect the crashed server, it can notify the appropriate server creation template to ‘spin up’ an equivalent server. What about all the configuration data on the crashed server? That too, through innovative cloud template patterns, can be recovered from a centralized repository.

    Planning (and Testing) D.R. for your critical, high-performance web apps, no longer needs to be the expensive and risky proposition that it used to be in the past.

    Thoughts? Comments? 

    Some DevOps Certification Level Questions
    https://www.anujvarma.com/some-devops-certification-level-questions/ | Fri, 05 May 2017

    1. Suppose you are to deploy 6 servers (EC2 Instances in AWS lingo) to run v1 of an app. Which of the following is a correct Ansible template that accomplishes this?

    1. - template:
         count: 6
         src: config.ini.j2
         dest: /share/windows/config.ini
         newline_sequence: '\r\n'
    2. - template:
         count: 6
         image: ami-v1
         instance_type: t2.micro
    3. - template:
         src: /mine/sudoers
         dest: /etc/sudoers
         validate: 'visudo -cf %s'
         count: 6
    4. - template:
         src: etc/ssh/sshd_config.j2
         dest: /etc/ssh/sshd_config
         owner: root
         group: root
         count: 6
         mode: '0600'
         validate: /usr/sbin/sshd -t -f %s
         backup: yes
    Answer – 2)

    2. Which of the following is NOT a characteristic of Memcached?

    1. Memcached is free and open source.
    2. Memcached is a distributed memory object caching system.
    3. The primary objective of Memcached is to enhance the response time for data that can otherwise be recovered or constructed from some other source or database. 
    4. Memcached provides security options around the key-value pairs that are cached.

    Answer – 4) is the one that is NOT true. Memcached does not provide security options around the cached key-value pairs.

    3. In AWS, Infrastructure is expressed as code (IaC). Which of the following is NOT true of AWS IaC?

    1. The code is written in simple JSON format
    2. The code is organized into files called templates and template groups called stacks. 
    3. The code has to be used in conjunction with the AWS CloudFormation Service.
    4. The code templates can be managed from the regular AWS Admin Console.

    Answer 3) Does not have to use CloudFormation.

    4. Which of the following does not logically belong under DevOps (choose all that apply)?

    1. Infrastructure as code
    2. Continuous deployment
    3. Automation
    4. Database Administration

    Answer 4) – DBA

    5. Application DevOps has a different set of core activities than Infrastructure DevOps. Which of the following activities is common between Application DevOps and Infrastructure DevOps?

    1. Provisioning
    2. Configuration
    3. Orchestration
    4. Deployment

    Answer 4) – Only deployment is common

      Full answer –

      App DevOps Components

      • Code building
      • Code coverage
      • Unit testing
      • Packaging
      • Deployment

      Infrastructure DevOps Components

      • Provisioning
      • Configuration
      • Orchestration
      • Deployment
