Thursday, October 21, 2021

The IT Manager’s Declaration for IAM


The IT Manager is being squeezed. 


Not so long ago, any business trying to get a competitive edge with technology could not do without dedicated help, whether contracted or an on-staff IT Manager. But IT was often poorly understood and badly managed--and stereotypically derided as aloof, incomprehensible, and yes, incompetent. For decades, as the cost of technology has fallen, IT Managers have been asked to do more with less. Now, DevOps and other trends in automation are really putting the squeeze on the IT Manager. 


These days businesses can get by without dedicated IT Management. But advances in DevOps, the ubiquity of SaaS offerings, and ever-increasing automation have not eliminated the need for IT Management. In the absence of dedicated resources, IT Management (including IAM) becomes a second job for someone in the company: the CFO, the HR Manager, a Principal Engineer, or the entrepreneur building their startup. They have insufficient time and resources to be an “expert” in IAM. Lacking the time and expertise, they struggle to build a safe and manageable IAM system for their business. They struggle to control who has access to their information--to keep their arms around their own intellectual property. 


Identity and Access Management (IAM) is such a part of our everyday lives, we seldom give it much thought. It is just a part of the wires, fiber optic cable, and software that are the backbone of the Internet. It is like the roads we drive--there all the time, but generally experienced unconsciously. And like those roads, even with safety measures like traffic lanes, street signs and signals, it can be dangerous.

 

We know not all drivers obey the rules of the road. Generally, drivers do, but sometimes when they are in a hurry, they run a red light. Sometimes drivers are on unfamiliar roads with unfamiliar rules. And sometimes drivers just make mistakes; they veer left, when they should have veered right. We all make mistakes. For all of these reasons we witness daily pile-ups on our roads.

 

Like roads without traffic lanes and street signs, the Internet without IAM would be chaos.


Even with IAM the Internet is dangerous! Shortcuts, lack of concern and mistakes lead to some nasty security breaches, not to mention the routine aggravation when users have difficulty getting access to their systems and data.

 

Learning and implementing IAM is doable, but hard. Tools for IAM are available for larger businesses. When those tools fall short, these businesses can compensate with dedicated technical resources to fill the gaps. For your everyday IT Manager, IAM is chock-full of technical jargon, complicated certificate management, carefully copying and pasting incomprehensible access URLs, and faulty integrations that fall short of the promise of SSO and require manual user provisioning on the identity provider side, the service provider side, or both.

THEREFORE, WE, THE IT MANAGERS, DECLARE:

 

We want a standard IAM model that is reliable, easy to use and secure. The standard IAM model will be:

 

·   STANDARD

If you’re thinking this declaration is unnecessary because we have SAML or some other existing service or protocol, you’ve missed the point. Consider electronic payments as a comparison. You don’t need to understand how the Automated Clearing House (ACH) system works to make an electronic payment. You don’t even need to know there is an ACH system. All you need to do is tell your bank where you want your money to go. Your bank does the work of getting your payment there. In the same manner, we want a standard IAM model that just works.

·   UBIQUITOUS

A standard IAM model will only be a “standard” when it is implemented widely. To achieve its benefits, the standard IAM model must either be widely adopted among the applications and services we typically use, or be capable of overlaying existing IAM systems.

·   EASY TO IMPLEMENT. Under the standard IAM model:

o   Adding applications, services and user assignments is as easy as select, commit and go. 

o   When integrating identity providers and service providers, systems will provide comprehensible error messages when configurations do not work.

o   User management (provisioning and deprovisioning) will incorporate role, individual and group provisioning over all systems. Today’s integration gaps need to go away.

·   DEVOID OF JARGON

Please, no tech-speak. If you expect an IT Manager “to access an API endpoint, to download a metafile, and to inspect a system’s SAML attributes,” you’ve fallen off the path. We want a standard IAM model that incorporates relatable metaphors, in the same way our computers have “desktops” and “files.” 

·   UNENCUMBERED by complicated certificate or encryption key management

Certificate management and encryption keys are complex and confusing even for seasoned IT people. Any system that explicitly requires the generation of certificates or encryption keys is fundamentally hard to use. The standard IAM model we want hides the heavy lifting of certificate management.

·   EXTENSIBLE

The goal of a small business is to become a big business. When a business outgrows the standard IAM model, stepping up to an “advanced IAM model” should be an evolution, not a rip-and-replace nightmare.

·   AUDITABLE

Any system that is responsible for controlling access to our applications and data needs to be auditable. The standard IAM model we want will maintain an audit log of provisioning activities, configuration changes, and other important events, like failed logins. This audit log needs to be readily accessible (and read-only) to the IT Manager.

·   SECURE

Let’s not forget why we are doing all of this. We want to keep our applications and data secure. Identity and access management is meant to improve security by making access to our applications and data easier to manage. Ease of use cannot come at the expense of security. We want a reliable, easy to use and secure standard IAM model.


Monday, August 30, 2021

LINK: Journal of AHIMA: Don’t Let Accounts Payable Derail Your HIPAA Compliance Efforts

An article I wrote about AP Automation, HIPAA, and Information Security was published in the American Health Information Management Association (AHIMA) online publication, the Journal of AHIMA.

Intro: 

Technology continues to disrupt organizations across industries—the way we work and go about daily operations has changed, and the healthcare sector is no stranger to these disruptions. COVID-19 has further accelerated this disruption with the adoption of telehealth and digital health solutions. Healthcare professionals are increasingly needing tools that provide quality care while also ensuring Health Insurance Portability and Accountability Act (HIPAA) compliance and the confidentiality of patient information.

In addition to embracing telehealth technology, healthcare organizations are undergoing a similar wave of digitization in the back office. One top priority is finding an automated accounts payable (AP) solution designed for paperless invoice capture, coding, and approval, along with electronic payment execution. AP automation was developed not only as a way to make it easier for practices to pay their suppliers electronically via ACH, check, virtual card, and FX, but to free AP and finance staff of the burdensome task of processing invoices and payments manually.

More at: 

https://journal.ahima.org/dont-let-accounts-payable-derail-your-hipaa-compliance-efforts/

Thursday, September 10, 2020

Hiring for DevOps -- And How To Navigate "DevOps Compliant" Resumes

DevOps has achieved buzzword status. Job descriptions and resumes are chock-full of DevOps descriptors, tools, and processes. Job descriptions are being built around DevOps, and applicants are building their resumes around DevOps. Resume searches with DevOps keywords return thousands of hits. This is a sure sign of the impact DevOps is having. And yet, this shift toward DevOps has flooded the job market with "DevOps compliant" resumes. That adds a challenge when you're trying to find the right people to join your DevOps team. 

And let's not forget, PEOPLE are the most important aspect of successfully implementing DevOps. Getting the right people on board is a must. 

So, here's what I've been doing--I break up the interview process into two or three stages:

  • Stage 1: Initial phone screen
    • 45 minutes
    • An initial phone screen is an undemanding way, in time and effort, to determine whether a candidate with a resume of interest really is the person on that resume!
  • Stage 2: In-person tech and team review
    • 3-4 hours
    • The meat of an interview is a face-to-face (Zoom calls included) review by the team to drill down on the candidate's technical capabilities and, more importantly, what they do and how they do it. Matching a candidate's temperament with the company's culture is vital for success. A cowboy who jumps in and makes changes may not be the right person in a strictly managed, process-oriented environment. But they might make an ideal candidate for many start-ups. 
  • Stage 3: Final interview (optional) and negotiation 
    • Another round of interviews may be appropriate depending on the size of the organization, and who has an interest. Often senior managers will want to meet all new hires. 

Then, there are four broad categories that I try to assess when interviewing. But, first, I always ask these two questions no matter who I am interviewing:

  • What do you know about the company? (Have you looked at our website?)
  • In your own words, how would you describe the position? (Did you read the job description?)
This is partly housekeeping. Recruiters and schedulers make mistakes. Or the job description may have changed since the applicant was engaged. But these questions can also reveal an applicant's interest in the job. That's important, because I want to work with people who take the initiative to learn about the company offering the job.  

Okay, so back to the four broad categories I use to assess candidates. They are: 
  1. Ownership
  2. Responsibility
  3. Technology
  4. Process
But, before I explain those categories, I have a few "go-to" questions I ask every candidate looking for a technical position. These questions are meant to take a quick but deep dive into the basic skills required of successful Ops people. If a candidate has done the work, these are simple questions. But if a candidate struggles with these questions, they likely won't recover during the remainder of the interview. 

Linux: What is load average?
Networking: What is a subnet mask?
Python (scripting): What's the difference between a "list" and a "tuple?"
Security: What is risk?
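
For the Python question, here's a minimal sketch of the answer I'm listening for--lists are mutable, tuples are immutable (and hashable). The names and values are just made-up illustration:

servers = ["web1", "web2"]    # a list: it can grow and change
servers.append("web3")        # fine

endpoint = ("10.0.0.5", 443)  # a tuple: fixed once created
# endpoint[1] = 8443          # would raise TypeError

latency = {endpoint: 12.5}    # tuples can be dict keys; lists cannot (unhashable)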

Ok, back to those four categories, and specific questions I use to get at these categories:
  • Ownership -- Or "work ethic and philosophy." That's somewhat officious, but what I'm trying to get at is: what excites the candidate? What do they think of the work? How do they go about getting it done? Some questions:
    • What do you consider success?
    • How do you know when you've succeeded? 
    • What is DevOps?
    • Talk about your interests. Do you have tech interests outside of work? Associations, projects? 
  • Responsibility -- Or role. What have you done? How much of it? And who did you do it with?
    • What size teams have you worked with or managed?
    • Talk about the environments you worked in:
      • How large have they been?
        • How many environments?
        • What networking is involved?
        • How many systems?
        • How much application data are you storing?
      • What role did you play in setting up the environment? And its upkeep?
      • What's your hands-on comfort level?
      • Are you on the on-call rotation?
  • Technology -- Or skills: depth and breadth. These questions are meant to get a detailed picture of the candidate's skills and the skills required for the job: 
    • Server Configuration Management
      • What experience do you have with Server Configuration Management? Docker, Chef, Puppet, etc.?
      • Describe systems you worked with and your involvement in their design, setup, and maintenance 
    • Containerization:
      • Are you familiar with Docker or other container technologies? 
      • If so, describe the setups you have worked with and your involvement in the design, setup, and maintenance of those systems. Did your involvement include re-architecting existing VM based deployments? If so, describe.
    • Cloud:
      • What cloud companies have you used? AWS, Google, Azure?
      • Describe the cloud services you've used. For example, in AWS: EC2, S3, EKS, ECR, etc.
  • Process -- How do you do the work?
    • Collaboration? Ticket management? Agile, kanban?
    • Build and deployment: 
      • Describe build and deployment systems you've worked with 
      • Release process and cadence
    • Monitoring:
      • What is your current monitoring setup (Nagios/Sensu, commercial)? 
      • How is it configured? 
      • A specific question like, "How do you determine the threshold for load alerts?" can lead to open-ended responses 
    • Backups
      • What is your backup plan?
      • Do you have a committed RTO and RPO?
      • Have you tested your backup plan?
    • Log aggregation? 
Another approach you can weave into the questions above is asking about real-world examples. Real-world examples are problems you have encountered. Set up the candidate with what you saw initially, and see where they go with it. This will give you insight into their experience and problem-solving capabilities.  

These categories of interest and questions have served me well. But, one warning: if you are playing "gotcha" with questions and candidates, you've lost your way. Interviews are a dialog. The dialog--the engagement--you have with a candidate is much more important than the questions. Interviewing is meant to find people you want to work with and who want to work with you. Be considerate. Be prepared to answer your own questions, and help a candidate who's stuck. Interviews are stressful. Sometimes people need another beat, and then they're great. Remember: you are representing your company. 

Always, always, always treat candidates with respect.

Thursday, July 2, 2020

Installing and configuring Python on MacOS (Catalina)

Installing Python (latest) on MacOS (Catalina)

New computer... and a new opportunity to set up Python. This happens about as often as I update my blog. Things change, so here's what installing Python on macOS looks like circa July 2020. 

Set BASH as default SHELL

First of all... I prefer BASH as my SHELL. In Catalina, ZSH is the default. Changing the default shell to BASH is a setting in Terminal:
Terminal --> Preferences --> General --> Shells open with: Command (complete path): /bin/bash
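
Alternatively, the same change can be made from the command line:
chsh -s /bin/bash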


Suppress annoying ZSH warning 

Apple really would prefer I stop using BASH. By default, when I open a Terminal, a message tells me that ZSH is now the default shell. That's a quickly tiresome reminder that I really should be using ZSH--and why am I using BASH anyway, when Apple considers it deprecated? To make matters worse, the version of BASH installed by default on Catalina is quite old, while ZSH is all up to date. But I don't want to be deterred, and I don't want to see this warning every time I launch a Terminal window. So, I added this to my ~/.bash_profile (reminder: .bash_profile is used instead of .bashrc on Mac OS X):
export BASH_SILENCE_DEPRECATION_WARNING=1

Set BASH command prompt

Yeah, I have preferences (doesn't everybody?). I set my command prompt to show me which user is active and what directory I'm in. This goes in my .bash_profile:

PS1='\u:\w\$ '
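
With that PS1 (\u is the current user, \w the working directory), the prompt looks something like:
kevin:~/projects$ 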

Install brew (MacOS package manager)

There's little life in a laptop meant for writing code that doesn't have a command line package manager. So... (do this as a user with 'admin' privileges):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)" 

Install pyenv, pipenv

Reference: I'm starting this section with the reference, because I want to give a hat tip to Gioele Barabucci for an amusing nod to how convoluted this has been, and an excellent description of how to set up a modern Python environment.
tl;dr version: Segregate all your project dependencies and versioning by using pyenv and pipenv to manage different environments within a project/directory. This is a really nice way of doing things. It avoids mucking about with the OS-provided Python installs and versions, and sidesteps your own dependency hell. Use brew to install these tools:
brew install pyenv 
brew install pipenv

Installing pyenv creates a .pyenv directory under your home directory where it stores and manages different versions of python so you can have them at your fingertips.

 Configure pyenv

We're getting close now. 

pyenv install --list # show all the versions of Python available to install. There are a lot!
pyenv install 3.8.3 # install Python 3.8.3 -- the latest at the time of writing
pyenv global 3.8.3 # set 3.8.3 as the default version
pyenv init
This last statement, pyenv init, is the magic that connects your shell to the local versions of Python installed and managed by pyenv. Really, all it does is add pyenv's shims directory (under ~/.pyenv) to your PATH, so the version of Python you selected is found ahead of the system one. Add this to your .bash_profile to have pyenv manage your Python stuff by default:
eval "$(pyenv init -)"
Reference:  
https://realpython.com/intro-to-pyenv/   
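
Putting the shell pieces from this post together, the relevant lines of my ~/.bash_profile now look like:

export BASH_SILENCE_DEPRECATION_WARNING=1
PS1='\u:\w\$ '
eval "$(pyenv init -)"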

 Configure pipenv

 pipenv is the glue between pyenv and your virtual Python environments. pipenv install creates a file called Pipfile in your current directory or project, where it tracks the Python version for this project and any dependent packages you install for THIS project and only THIS project. For example:
pipenv install tweepy 
And as the output helpfully suggests: To activate this project's virtualenv, run:
pipenv shell 
Development environment segregation FTW!  
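
For the curious, the pipenv install above leaves behind a Pipfile along these lines (a sketch--exact contents and versions will vary):

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
tweepy = "*"

[dev-packages]

[requires]
python_version = "3.8"

pipenv also writes a Pipfile.lock alongside it, pinning the exact versions it resolved.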

Install pycharm

Honestly, I might keep using vim for the most part--old habits. I may be an old dog, but I want to learn new tricks, so I'm installing an IDE. Download the installer here:


EOJ


Monday, February 27, 2017

The DevOps Phenomenon -- Chapter 1: What DevOps Means (Part 1 of 6)

Chapter 1: What DevOps Means (Part 1 of 6)


For the last few years I have been stalked on the Internet. A lot of companies have paid to target their DevOps ads at me. I see New Relic ads entreating, “What is DevOps?”[i] Puppet wants me to participate in its State of DevOps survey. Gartner wants me to know its annual report on DevOps is now available. What has been built around a word coined only a short time ago is remarkable. Yet the word itself, DevOps, has defied easy definition. Years into the DevOps phenomenon, what DevOps means continues to be contested.

So what does DevOps mean? Let’s begin before the beginning and take a closer look at the words themselves. Looking back at the etymology of words is a good way to see where this mash-up of ideas called DevOps is headed.

Develop

Develop, the root of the word development, is a lush word. It has lovely variations and often connotes magic--magic concerned with conjuring, making something out of nothing, like “conjuring a table and chairs out of thin air.”

Many other connotations that do not directly refer to magic still suggest a mysterious process. It is what is conceived in the mind and made real. We expect to cause good outcomes, and love the rewards. It is a process and it takes time, but we are undeterred. Develop is suffused with hope and expectation. What we make in the process of development is exciting!  We want the negative made visible. We want to produce solutions, to move forward, to learn and grow. We want to develop!



Figure 6: http://www.wordle.net
Merriam-Webster dictionary definition:[ii]




Operations

Operations is not as popular a word as Develop. It is used about a fifth as often.[iii]  Its uses in business, and particularly in Internet Operations, are new. These usages are often unfamiliar to people outside the day-to-day workings of the Internet.

Figure 7: http://www.wordle.net
Business Dictionary definition[iv]
Merriam-Webster dictionary definition:[v]

Its most common uses have to do with surgery and machinery. Operate is often about the body. Surgeons operate on bodies. Surgeons wage life-and-death battles in the body against disease and injury. Operations is concerned with the functioning of systems: their health and well-being, their availability and performance. Surgeons operate on orga (organic) machines. Operations works on mecha (mechanical) machines. A transcendent concept amongst all the variations of operate is the notion that things need attention and care. They need maintaining to ensure their proper functioning.



So now we have these two words, Develop and Operations. They have been mashed together into the neologism DevOps, but what does DevOps mean?  It means Dev and Ops are two aspects of the same process. They are co-dependent. One does not exist without the other. The development of new software depends on the ability to operate it. Metaphysically (yes, I am going there), if the mind is the seed of the body and the body is the seed of the mind, then Dev is the mind and Ops is the body. Without the mind the body is not animated. Without the body, the mind does not exist.


Figure 8: Yin Yang
I experienced DevOps as an epiphany and conceive of it with more than a little bit of grandeur. My experience is not unique. Many IT professionals have simultaneously experienced this epiphany. This shared experience is the locomotion of the DevOps phenomenon. Of course, it’s a big IT world out there, and when many IT veterans were introduced to DevOps they saw just another buzzword. Their introduction to DevOps might have been more marketing than thought. They see lots of hype for a repackaging of how things have always been done. There has been a lot of DevOps hype. Marketers, as they are wont to do, have often slapped a DevOps label where it doesn’t fit or even make sense.

Every reaction has a re-reaction, and DevOps pedants have taken great offense at how DevOps has sometimes been marketed. They take particular offense when DevOps is used as a job title or a department name. They want to protect the idea that DevOps is a new way of doing things--you can’t “buy” a DevOps. They want to protect the DevOps phenomenon. They are overreacting. Even amongst the naysayers there is a grudging admission that the DevOps wave is so wide and large that it will not be stopped.[vi]



Figure 2: The Principal Components of DevOps
DevOps is a new word for a new way of ordering thoughts and actions. It is expressed most prominently in five primary categories:
1. Culture
2. Automation
3. Monitoring
4. Communication
5. Security




[i] http://newrelic.com/devops/what-is-devops
[ii] http://www.merriam-webster.com
[iii] https://books.google.com/ngrams
[iv] http://www.businessdictionary.com/definition/operations.html
[v] http://www.merriam-webster.com
[vi] Add reference

Friday, May 20, 2016

Puppet Bootcamp Boston -- "Puppet, Security and PCI"

I gave a presentation yesterday at Puppet Bootcamp Boston. The title of the presentation was "Puppet, Security and PCI."  I went into the history of Internet credit card theft and the emergence of the PCI standard to combat the threat.  The conference was at the Revere Hotel, which has a fabulous auditorium. The seats closest to the stage were actually couches with little swivel tables.  And every seat had its own power!  Also, free drinks at the end of the day on the rooftop bar--not bad :)

Thank you Puppet for the opportunity!

A PDF of my slides is here.

Monday, April 11, 2016

DevOps dead? Not so fast.


Andrey Akselrod over at TechCrunch wrote "Managed services killed DevOps." He may think he's covering new ground proclaiming the death of DevOps in his article, but he's not.  This story has been around for quite a while; it's almost as old as the word DevOps itself--which altogether is a pretty short amount of time.  Here's Mike Gualtieri from five years ago with his take on the death of DevOps: "I Don't Want DevOps. I Want NoOps."

The panacea of "Full Stack Developers" will not meet the need. There are many large and complicated systems that require a lot of attention to keep them running. And these systems are growing (think Internet of Things).  While the bar has been raised for when dedicated Ops is required, it has not gone away, and it won't be going away anytime soon.

Thursday, April 7, 2016

TOP 8 || BEST Linux Command Line Tools

Here they are... my eight favorite linux command line tools. 

#8) sysstat

All systems should have sysstat installed.  Sysstat is the easiest way to get basic server performance logging running on a system.  Sysstat includes the utility sar, which tracks system utilization over time.  There are few things more frustrating than trying to determine what happened overnight on your server when you don't have any data on the performance of the box because sar is not installed.
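
A few sar invocations I lean on (assuming sysstat's periodic collection is enabled; the archive path varies by distribution):

sar -u 1 5   # CPU utilization: 5 samples, 1 second apart
sar -q       # load average and run queue from today's collected data
sar -r -f /var/log/sa/sa15   # memory usage from the 15th, read from the archived file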

#7) rsync

Moving large groups of files around is made a lot easier with rsync.  By default, rsync compares the size and modification time of each source file against the target (or full checksums, with the -c option).  When the source and destination files match, the copy is skipped.  Skipping files that have already been copied can really cut down the work effort! Running an rsync in cron is an easy way to keep a backup or replica of data files up to date.
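
A typical invocation (backup.example.com is a placeholder): -a preserves permissions and times, -v is verbose, -z compresses in transit:

rsync -avz /data/ backup.example.com:/backups/data/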

#6) telnet

Telnet is a granddaddy of network tools. Telnet and its server-side daemon, telnetd, for many years provided remote console connectivity to servers and network gear.  Because it passes traffic in plain text, including login information, it has rightly been relegated to the landfill of antiquated technologies.  I haven't connected to a telnet server in at least a decade. However, the client program is still useful.  Telnet to a well-known port enables an interactive session over TCP with many common network programs that pass data in plain text, like HTTP, DNS, and SMTP--and even SSH.  SSH traffic itself is encrypted, but most SSH daemons will announce their version in a plain-text banner when you connect.  Telnet is good for simple tests of network connectivity or for passing protocol commands over the command line.
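
A few illustrative sessions (hostnames are placeholders):

telnet www.example.com 80    # then type: GET / HTTP/1.0 and press Enter twice
telnet mail.example.com 25   # an SMTP server greets you with its banner
telnet shell.example.com 22  # an SSH daemon announces itself, e.g. SSH-2.0-OpenSSH_7.4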

#5) emacs 

I am a vi user myself, but I admire the heck out of emacs. I recall vividly, back in my WebLogic days in the late 90s, setting up a new developer, Anno Langen. Bob Pasker was walking Anno through the environment. What has stuck in my mind is their 30-minute conversation filled with backslapping and high fives as they compared their emacs macros. For all you emacs people out there, I salute you.

#4) lynx

It may come as a surprise that I have included a text-based web browser in a list of must-have network tools. But having an actual browser in your console is handy. There are times when it is easier to fire up lynx to smoke test the content of a web page without having to exit the console. Command line tools like wget or curl don't always do the job. Sometimes you want an interactive session, not just a single GET or POST.

#3) mtr

Back in the day, traceroute was the common tool for tracing the path of packets from a source to a destination. It relied on ICMP responses to generate its maps. Because those same useful ICMP responses have been exploited to generate denial-of-service attacks, they have been largely turned off on the Internet. Instead of getting a nice list of hosts between you and your target, with useful timing data indicating where in the network there are bottlenecks, you get back a lot of no-replies. MTR to the rescue!  Somewhat similar in look and feel to traceroute, MTR sends its own probes (ICMP echo by default, with UDP and TCP options) and continuously updates its network map. Once again, we can see where the network bottlenecks are.
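
For example (example.com as the target):

mtr example.com             # live, continuously updating display
mtr -rw -c 20 example.com   # report mode: 20 probes per hop, wide output, then exit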

#2) nmap

NMAP is a very useful tool for scanning networks and ports. It's a useful way to uncover information about what system is on the other end. If an nmap scan reveals port 3389 is open, chances are pretty good you are looking at a Windows server. Nmap is also a quick and easy way to scan a range of IP addresses to see what's on your network and how many IP addresses are in use.
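
Two everyday examples (the addresses are placeholders):

nmap -sn 192.168.1.0/24   # ping scan: which hosts are up on the subnet
nmap -p 1-1024 10.0.0.5   # probe the first 1024 TCP ports on a single host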

#1) nc

nc may be my favorite of any network tool. What makes nc particularly awesome is its ability to open an ad hoc TCP or UDP connection, or to listen on any available port. This is a great way to test network connectivity between systems and networks during the network build phase, before applications are installed. 
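
A quick sketch of that build-phase test (host and port are placeholders; exact flag syntax varies a bit between netcat flavors):

nc -l 8080                       # on the server: listen on TCP port 8080
nc server.example.com 8080       # on the client: connect; typed text flows both ways
nc -zv server.example.com 8080   # or just check reachability without sending data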

So what do you think? Is there a favorite of yours that is not on the list? Please let me know in the comments.


Saturday, February 27, 2016

The DevOps Phenomenon -- Abstract


The DevOps Phenomenon
Continuous Integration and Security 
in the Internet Age
by Kevin Eberman
Abstract
This book is about DevOps. DevOps integrates previously misaligned concerns: Development and Operations. Development teams are driven to continually add new features and functionality to the application. These changes cause instability, which imperils the prime directive of Operations teams—keeping the applications running. DevOps is the convergence between Development and Operations, making the Internet, how it is developed, and how it operates, more efficient, effective, and secure. Amazing convergences are emerging between science, business, culture, and politics; DevOps is one of them. “Talking about music is like dancing about architecture” will no longer be a hallmark of inane comparisons, but a harbinger of new ways of seeing and doing.
The Internet has been the engine of my professional career. I have 20 years of experience in San Francisco and Cambridge at software companies that have helped make the Internet what it is. This book, my story, my DevOps trip, is a microcosm of the Internet during this epoch of the Information Revolution.
“When you come to a fork in the road, take it!”
Audience
Readers of Wired, Quartz, and InfoWorld.
Ops people, full-stack developers,
software executives, and product managers.
Me.


Comparisons
DevOps for Developers, Michael Hüttermann. Apress, 2012.
The Phoenix Project, Gene Kim, Kevin Behr, and George Spafford. IT Revolution Press, 2013.
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Jez Humble and David Farley. Addison-Wesley Signature Series, 2010.
The Painted Word, Tom Wolfe. Picador, 1975.


Kevin Eberman 
twitter: @Manager_of_it 
http://manager-of-it.blogspot.com