
Monday, April 11, 2016

DevOps dead? Not so fast.


Andrey Akselrod over at TechCrunch wrote "Managed services killed DevOps." He may think he's covering new ground proclaiming the death of DevOps in his article, but he's not. This story has been around for quite a while; it's almost as old as the word DevOps itself, which altogether is a pretty short amount of time. Here's Mike Gualtieri from five years ago with his take on the death of DevOps: "I Don't Want DevOps. I Want NoOps."

The panacea of "full-stack developers" will not meet the need. There are many large and complicated systems that require a lot of attention to keep them running, and these systems are growing (think Internet of Things). While the bar has been raised for when dedicated Ops is required, it has not gone away, and it won't be going away anytime soon.

Thursday, April 7, 2016

TOP 8 || BEST Linux Command Line Tools

Here they are... my eight favorite Linux command-line tools.

#8) sysstat

All systems should have sysstat installed. Sysstat is the easiest way to get basic server performance logging running on a system. Sysstat includes the utility sar, which tracks system utilization over time. There are few things more frustrating than trying to determine what happened overnight on your server when you don't have any data on the performance of the box because sar was not installed.
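
For example, once sysstat is collecting, sar can answer the "what happened overnight" question (the sa file path below follows the Red Hat convention and varies by distro):

    # CPU utilization for today, from the collected history
    sar -u
    # replay a specific day's data from its saved file
    sar -u -f /var/log/sa/sa11
    # or take 5 live samples at 2-second intervals
    sar -u 2 5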

#7) rsync

Moving large groups of files around is made a lot easier with rsync. Rsync compares each file you would like to copy against the destination: by size and modification time by default, or by checksum if you pass the -c flag. If the source and destination files already match, the copy is skipped. Skipping files that have already been copied can really cut down the work effort! Running rsync from cron is an easy way to keep a backup or replica of data files up to date.
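
For instance, a minimal cron entry for a nightly replica might look like this (the host and paths are hypothetical):

    # 2 a.m. nightly replica; -a preserves permissions and times, --delete prunes removed files
    0 2 * * * rsync -a --delete /data/ backup1:/data/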

#6) telnet

Telnet is a granddaddy of network tools. Telnet and its server-side daemon, telnetd, provided remote console connectivity to servers and network gear for many years. Because it passes traffic, including login information, in plain text, it has rightly been relegated to the landfill of antiquated technologies. I haven't connected to a telnet server in at least a decade. The client program, however, is still useful. Telnetting to a well-known port gives you an interactive TCP session with the many common network programs that speak plain-text protocols, like HTTP and SMTP, and it is even informative against SSH: SSH doesn't pass its data in plain text, but most SSH daemons will tell you the version of SSHd you have connected to. It's good for simple tests of network connectivity or for passing protocol commands over the command line.
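
For example, you can talk SMTP to a mail server by hand (the host is a placeholder; HELO and QUIT are standard SMTP verbs typed into the session):

    telnet mail.example.com 25
    HELO myhost.example.com
    QUIT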

#5) emacs 

I am a vi user myself, but I admire the heck out of emacs. I recall vividly, back in my WebLogic days in the late 90s, setting up a new developer, Anno Langen. Bob Pasker was walking Anno through the environment, and what has stuck in my mind was their 30-minute conversation, filled with backslapping and high fives, as they compared their emacs macros. For all you emacs people out there, I salute you.

#4) lynx

It may come as a surprise that I have included a text-based web browser in a list of must-have network tools, but having an actual browser in your console is handy. There are times when it is easier to fire up lynx to smoke-test the content of a web page without having to exit the console. Command-line tools like wget or curl don't always do the job; sometimes you want an interactive session, not just a single GET or POST.
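
A couple of quick invocations (the URL is a placeholder):

    # browse interactively in the console
    lynx http://example.com/
    # or dump the rendered text to stdout for a quick smoke test
    lynx -dump http://example.com/ | head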

#3) mtr

Back in the day, traceroute was the common tool for tracing the path of packets from a source to a destination. It relied on ICMP responses to generate its maps. Because those same useful ICMP responses have been exploited to generate denial-of-service attacks, they have been largely turned off on the Internet. Instead of getting a nice list of hosts between you and your target, with useful timing data indicating where in the network the bottlenecks are, you get back a lot of no-replies. MTR to the rescue! Somewhat similar in look and feel to traceroute, mtr sends continuous probes (ICMP echo by default, with UDP as an option) and aggregates loss and latency statistics over time, so once again we can see where the network bottlenecks are.
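
mtr also has a report mode that runs a fixed number of cycles and prints a per-hop summary, handy for pasting into a ticket (the target is a placeholder):

    # 10 probe cycles, then print the per-hop loss/latency table
    mtr --report --report-cycles 10 example.com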

#2) nmap

NMAP is a very useful tool for scanning networks and ports, and a useful way to uncover information about what system is on the other end. If an nmap scan reveals that port 3389 is open, chances are pretty good you are looking at a Windows server, since 3389 is Remote Desktop. Nmap is also a quick and easy way to scan a range of IP addresses to see what's on your network and how many IP addresses are in use.
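
Two typical invocations (the addresses are placeholders; only scan networks you are responsible for):

    # ping scan: list live hosts on the subnet without port scanning
    nmap -sn 192.168.1.0/24
    # probe a single host and try to identify service versions on its open ports
    nmap -sV 192.168.1.10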

#1) nc

nc may be my favorite network tool of all. What makes nc particularly awesome is its ability to open an ad hoc TCP or UDP listener on any available port. This is a great way to test network connectivity between systems and networks during the network build phase, before applications are installed.
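
A typical build-phase connectivity test looks like this (the host and port are placeholders; some netcat flavors want nc -l -p 4444 instead):

    # on the receiving host: listen on TCP port 4444
    nc -l 4444
    # on the sending host: connect; anything typed should appear on the listener
    nc server1.example.com 4444
    # add -u to both sides to run the same test over UDP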

So what do you think? Is there a favorite of yours that is not on the list? Please let me know in the comments.


Saturday, February 27, 2016

The DevOps Phenomenon -- Abstract


The DevOps Phenomenon
Continuous Integration and Security 
in the Internet Age
by Kevin Eberman
Abstract
This book is about DevOps. DevOps integrates previously misaligned concerns: Development and Operations. Development teams are driven to continually add new features and functionality to the application. These changes cause instability, which imperils the prime directive of Operations teams—keeping the applications running. DevOps is the convergence between Development and Operations, making the Internet, how it is developed, and how it operates, more efficient, effective, and secure. Amazing convergences are emerging between science, business, culture, and politics; DevOps is one of them. “Talking about music is like dancing about architecture” will no longer be a hallmark of inane comparisons, but a harbinger of new ways of seeing and doing.
The Internet has been the engine of my professional career. I have 20 years of experience in San Francisco and Cambridge at software companies that have helped make the Internet what it is. This book, my story, my DevOps trip, is a microcosm of the Internet during this epoch of the Information Revolution.
“When you come to a fork in the road, take it!”
Audience
Readers of Wired, Quartz, and InfoWorld.
Ops people, full-stack developers,
software executives, and product managers.
Me.


Comparisons
DevOps for Developers, Michael Hüttermann. Apress, 2012.
The Phoenix Project, Gene Kim, Kevin Behr, and George Spafford. IT Revolution Press, 2013.
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Jez Humble and David Farley. Addison-Wesley Signature Series, 2010.
The Painted Word, Tom Wolfe. Picador, 1975.


Kevin Eberman 
twitter: @Manager_of_it 
http://manager-of-it.blogspot.com 

Monday, February 13, 2012

Different takes on why monitoring sucks and what's to be done about it


Why monitoring sucks — for now

http://gigaom.com/2012/02/12/why-monitoring-sucks-for-now/


A new (old) model
I’d suggest that any well-designed monitoring tool can help automate the OODA loop for operations teams.
1. Deep integration
2. Contextual alerting and pattern recognition
3. Timeliness
4. High resolution
5. Dynamic configuration
What’s next for monitoring?


Why Alerts Suck and Monitoring Solutions need to become Smarter

http://www.appdynamics.com/blog/2012/01/23/why-alerts-suck-and-monitoring-solutions-need-to-become-smarter/

#1 Problem Identification – Do I have a problem?
#2 Problem Isolation – Where is my problem?
#3 Problem Resolution – How do I fix my problem?



My ideal monitoring system
http://forecastcloudy.net/2012/01/12/my-ideal-monitoring-system/



  • Hosted (CloudKick, ServerDensity, CloudWatch, RevelCloud, and others) vs. installed (Nagios, Munin, Ganglia, Cacti)
  • Hosted solutions' pricing plans use varied parameters, such as price/server, price/metric, retention policy, number of metrics tracked, realtime-ness, etc.
  • Poll-based collection, where the collecting server polls the other servers/services, vs. push, where a client on each server pushes locally collected data to the logging/monitoring server (see the sketch after this list)
  • Allowing custom metrics: not all systems allow monitoring, plotting, sending, and alerting on custom data (at least not in an easy manner)
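
As a toy illustration of the push model, a cron entry might ship one locally collected metric to the monitoring server (the collector endpoint and metric are hypothetical):

    # every minute, push the 1-minute load average to a hypothetical collector
    * * * * * curl -s -d "host=$(hostname)&load=$(cut -d' ' -f1 /proc/loadavg)" http://collector.example.com/metrics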

Friday, February 3, 2012

puppet day #2 -- and I need a custom fact

Objective
One of the first things I wanted to accomplish with puppet was to track down rogue cron jobs under the accounts of people who are no longer here. The broader objective is to delete old, unused accounts.

Problem
But there was some evidence that a few of these old accounts still had cron jobs running. So we couldn't just delete the old accounts; we needed to proceed cautiously to ensure we didn't stomp on some cron job that was actually needed!

I was looking for puppet to tell me which systems had cron jobs under this old account.  Now, puppet is a declarative language, so something like:

if /var/spool/cron/userfoo exists, notify me, so I can take a look and see what I need to fix/replace
doesn't exist!  In puppet, you have to declare whether something should or should not exist and then puppet will take the corresponding action.  I just wanted puppet to tell me about something on my system.  I didn't want puppet to take an action!

Solution
It's up to the puppetlabs-provided facter to help out here. Puppet ships with a bundle called facter that collects lots of bits of information about systems, like their OS, RAM, kernel version, etc. The code to gather these facts is written in ruby and is extensible. I needed a custom fact that would indicate whether or not /var/spool/cron/userfoo or (on Solaris) /var/spool/cron/crontabs/userfoo exists. Writing that code is actually straightforward (my first ruby code ever! yay!). Getting that code onto my agents, however, had an obstacle to overcome.
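
Here is a minimal sketch of that kind of fact (the fact and module names are illustrative, not my exact code):

    # modules/cron_check/lib/facter/userfoo_cron.rb -- module name is illustrative
    Facter.add(:userfoo_cron) do
      setcode do
        File.exist?('/var/spool/cron/userfoo') ||
          File.exist?('/var/spool/cron/crontabs/userfoo')
      end
    end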

Problem #2
Puppet does not deliver custom facts to agents by default. Agents and the puppetmaster need this set in /etc/puppet/puppet.conf:
    pluginsync = true
This required using puppet to update puppet.conf and restart puppet, which is what I built. Getting puppet to deliver custom facts by default is a listed feature request: http://projects.puppetlabs.com/issues/5454


The only gotcha here is to make sure you include:
     hasrestart => true,
in your init.pp for the puppet service. Otherwise, puppet will send the stop but never the start, because once it has stopped itself it is no longer running to issue the start!
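
For context, a sketch of the relevant service declaration:

    # the puppet service, managed by puppet itself
    service { 'puppet':
      ensure     => running,
      enable     => true,
      hasrestart => true,
    }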

Resources
http://conshell.net/wiki/index.php/Puppet
I grabbed this:
kill -USR1 `cat /var/run/puppet/puppetd.pid`; tail -f /var/log/syslog
from the above link, which I shortened to:
kill -USR1 `pgrep puppet`; tail -f /var/log/syslog

Config details after the jump

Puppet installed -- let's do something!

Having gotten a critical mass (but not all) of my servers running puppet and talking to the puppetmaster, I was ready to start actually doing something with puppet. The first thing I wanted to do was update the motd on the servers. I appreciate a standard look and feel when logging into a server, along with some useful info about the host I'm on. Moreover, I wanted to communicate to system users that the host is now managed by puppet.

I found this: https://github.com/aussielunix/puppet-motd, which uses a puppet template to collect a number of facts, along with a really big ASCII banner that I quite like:
                              _   
 _ __  _   _ _ __  _ __   ___| |_ 
| '_ \| | | | '_ \| '_ \ / _ \ __|
| |_) | |_| | |_) | |_) |  __/ |_ 
| .__/ \__,_| .__/| .__/ \___|\__|
|_|         |_|   |_|             
                                            _   _ 
 _ __ ___   __ _ _ __   __ _  __ _  ___  __| | | |
| '_ ` _ \ / _` | '_ \ / _` |/ _` |/ _ \/ _` | | |
| | | | | | (_| | | | | (_| | (_| |  __/ (_| | |_|
|_| |_| |_|\__,_|_| |_|\__,_|\__, |\___|\__,_| (_)
                             |___/                


Any files that have a 'Puppet' header need to be changed in puppet. 

Interesting tidbit
In my motd.erb template, I included:
Uptime:    <%= uptime %>
What happens with this is that the "uptime" fact (and the other facts included in the template) gets evaluated on the client on every puppet run, and a flat file without the puppet markup is laid down on the file system. This file gets compared and reevaluated on every run. Here's the point: every day the uptime changes, so a new file is laid down in /etc/motd and the old file is backed up. This is clearly pretty inefficient, and it needs to be replaced with something that updates the uptime on login, not on every puppet run.
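
For reference, here is a sketch of the file resource driving that behavior (the template path assumes the module layout above):

    # /etc/motd is rewritten whenever the rendered template differs
    file { '/etc/motd':
      ensure  => file,
      content => template('motd/motd.erb'),
    }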

Resources
https://github.com/aussielunix/puppet-motd
My init.pp and motd.erb were going in the comments, but I just discovered that I can't format in comments, so I'm adding the file specs after the jump...

Wednesday, February 1, 2012

managing users with puppet

useful resource:
http://itand.me/using-puppet-to-manage-users-passwords-and-ss

Friday, January 27, 2012

Bumps along the way of deploying puppet

In my new environment we have about 100 servers of various flavors, predominantly CentOS and Solaris, with several RedHat servers and a couple of Windows and Debian boxes. The configurations, versions, and patch releases are all over the place. Some of these boxes are quite old: (cough) Fedora 5 (cough) (cough) Solaris 9 (cough).

My first goal is simply to get puppet onto all of these servers. Of the ~100 servers I need to manage, about 30 of them are dev/qa/test boxes. I now have puppet installed on all of them. There were a few bumps along the way.

Impediments

1. The right repository--I'm sure that for the yum gurus out there this will seem trivial, but it was a problem for me. A repository I was initially using had an older version of puppet, which I did not realize immediately. It wasn't until one of the boxes I was installing puppet on already had a repository configured with a newer version of puppet that I noticed something was off, and it wasn't until I tried connecting it to the puppet-server that I knew I had a problem, because I got this somewhat unhelpful error: Error 400 on SERVER: No support for http method POST

Thanks to http://bitcube.co.uk/content/puppet-errors-explained for the explanation.

So, I updated the puppet-master and I fixed the repository I was using and now I'm getting the latest and greatest.

2. Yum dependencies--Occasionally I ran into dependency issues when running yum install. It wasn't terribly clear to me why I got these errors, but generally it happened when there was a longer list of dependencies. I was typically able to work around this by simply doing a yum install of one of the dependent packages first and then trying yum install puppet again, and it worked.
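
Something like this (the dependency name is just an example of one that failed for me):

    # install a troublesome dependency explicitly first...
    yum install ruby-shadow
    # ...then retry
    yum install puppet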

3. Old OSes without the required packages--In some cases I could not work around the dependencies because the OS version was so old--Fedora 8, 7, and 5. These OSes were looking for libselinux-util, which wasn't made available until Fedora 10! Note to self: put these systems at the top of the list to retire.

4. Puppetmaster directory details--Also worth mentioning: it took me some time to sort out which directories are needed on the puppetmaster and where they need to be located. I'm not sure if this is a poor-documentation problem or a user problem, but it took some trial and error to get it right.

I needed to have:

/etc/puppet/manifests/site.pp
/etc/puppet/modules

and as an example under /etc/puppet/modules I needed:

/etc/puppet/modules/sudo/manifests/init.pp
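
Putting those together, here is a minimal sketch of the two files (the class body is illustrative):

    # /etc/puppet/manifests/site.pp
    node default {
      include sudo
    }

    # /etc/puppet/modules/sudo/manifests/init.pp
    class sudo {
      package { 'sudo':
        ensure => installed,
      }
    }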


Resources


  1. AWESOME, very helpful, and engaged channel: puppet IRC. IRC server irc.freenode.net, room: #puppet
  2. List of common puppet errors with pointers to fix: http://bitcube.co.uk/content/puppet-errors-explained
  3. Of course the puppet docs, particularly for installing puppet on Solaris: http://projects.puppetlabs.com/projects/1/wiki/Puppet_Solaris
  4. RPM search: http://rpm.pbone.net/
(updated to clean-up layout, edit fonts, etc)

Implementing DevOps

I've started at a new company. They are a very large company with a medium-sized web presence, operating several online brands for a niche audience.

Generally, they are a functioning company with a well-established environment that is running well enough but is ready for an overhaul. They have a mid-to-long-term project to consolidate different content management systems into a unified content management system that allows sharing of content between brands. This larger project provides an opening to perform a major facelift on internal operations. WooHoo!

Currently, releases are largely unautomated and time-consuming; they take place during off-hours and require quite a few people online to do the actual work and testing, or just to be on hand in the event something blows up. There seems to be plenty of room to implement a DevOps methodology for releases, particularly around automation and measurement.