Zero Trust definition (by ChatGPT)

Lately, many people are heading to ChatGPT to see what answers, posts, reports, and all kinds of other work it can come up with. Some of these results are pretty amazing. So, in an effort to get my blog going again, I thought I would post what ChatGPT has to say. I would like to point out that each time you give it the same input, it produces different, or slightly different, results. I chose to have it write about Zero Trust for data access rather than ZTNA (application access). The reason is that I believe Zero Trust principles for data are more valuable than a focus on the infrastructure side of the equation.

So, without further ado, Zero Trust by ChatGPT:

Zero trust is a security concept that challenges the traditional approach to cybersecurity by assuming that all users and devices within a network are potentially untrusted. This means that even if a user or device has been authenticated and granted access to certain resources within the network, they are still subject to ongoing evaluation and verification to ensure they remain trustworthy.

The idea behind zero trust is to never automatically trust anyone or anything within a network, regardless of their supposed level of access or privilege. Instead, every access request is treated as if it were coming from an external, untrusted source, and is subjected to strict verification and evaluation.

One of the key benefits of this approach is that it can help prevent data breaches and other security incidents by making it much harder for malicious actors to gain access to sensitive data. With zero trust, even if an attacker manages to compromise a user’s credentials or gain access to a device, they will still be unable to access sensitive data without undergoing additional verification.

Another advantage of zero trust is that it can help organizations more effectively manage and control access to their data. By implementing strict policies and procedures for verifying access requests, organizations can ensure that only authorized users are able to access sensitive data, and that they can only access the specific data they need for their work. This can help reduce the risk of data breaches and other security incidents, as well as help organizations comply with regulations and standards such as the General Data Protection Regulation (GDPR).

Implementing a zero trust approach to data access can be challenging, as it requires organizations to rethink their existing security practices and procedures. However, with the right tools and technologies, organizations can effectively implement zero trust and enjoy the benefits of improved security and better control over their data. Some of the key tools and technologies that can support a zero trust approach to data access include:

  • Multi-factor authentication: By requiring users to provide multiple forms of authentication, such as a password and a security token, organizations can make it much harder for attackers to gain access to sensitive data.
  • Access controls: By implementing strict access controls and policies, organizations can ensure that only authorized users are able to access sensitive data, and that they can only access the specific data they need for their work.
  • Encryption: By encrypting sensitive data, organizations can protect it from being accessed by unauthorized users, even if an attacker gains access to a device or network.
  • Continuous monitoring: By continuously monitoring access to sensitive data, organizations can quickly detect and respond to any potential security threats or breaches.

Overall, zero trust is a powerful approach to data access that can help organizations improve their security and better protect their sensitive data. By never automatically trusting users and devices within a network, and subjecting every access request to strict verification and evaluation, organizations can effectively prevent data breaches and other security incidents.

VPN and ZTNA – What difference does it make?

I know it has been a while, but that’s what happens when your work is very interesting! So I’m back to talk a little more about ZTNA and remote access architectures.

One of the most misunderstood parts of ZTNA is how it compares with traditional VPN deployments, from authentication through to what the service provides the end-user. From a user standpoint, there may not seem to be much difference, aside from the fact that you have to mess with an app, turn all these knobs, and can't reach your local systems. This is particularly true for administrators and anyone who deploys these solutions. When I talk about VPN or traditional VPN, I'm not necessarily talking about the tunneling technologies themselves but the remote access infrastructure that we generally call "VPN" as a product group. So in this post, I'm going to break it down a bit to help people understand why it matters. I'll focus on authentication but get into some other areas around it. I'm going to take some liberties and not be perfectly exact with everything, because I'm just trying to paint an overview picture to make a specific point.

A simplified view of traditional VPN

In traditional VPN the end-user has some client (that isn't a browser). The client can stand up a tunnel, whether that be IPsec, TLS, or some other tunneling technology. There are some knobs in there to play with: maybe the user has to select where they would like to connect, or can see statistics and other information, or maybe they just get to hit connect and authenticate. On the other side is a device, maybe a firewall, maybe something else, acting as a "VPN concentrator."

When the end-user connects to their company VPN they authenticate for the service, and the tunnel is generally a 'full tunnel', meaning all of their traffic, no matter what it is, heads for the VPN endpoint at Company XYZ. Once authenticated, they're an entitled part of the corporate network. They have an IP address provided by corporate infrastructure that is part of the network infrastructure's IP address plan. If the end-user were to perform a port scan, they would discover that they can likely reach, and get a response from, many hosts on Company XYZ's network. Through policy and access control the administrator can restrict this in some ways, but the fact is that the user is part of the network as if they were on-site. In earlier days we thought this was great! Employees connected as if they're really there! People also liked this because there is easy visibility into what an end-user is doing and how they're using resources.

Not only can adding/moving/changing policy and access control be arduous and time-consuming, we often forget to remove temporary access. Soon the firewall or VPN endpoint is loaded up with unnecessary config. Wait until you replace devices and have to move all of that over, because it's too much time to walk through and figure out what's in use and what's not. Now multiply that across all of the VPN endpoints/firewalls you manage.

In short, the end-user is generally part of the same network infrastructure as the applications.

ZTNA/SDP-based remote access models

In this model, there is usually a client as well, though not always, thanks to client-less (browser-based) options. However, because these services are centrally managed cloud services (with few exceptions), the clients have a much lighter-weight feel to them. There are virtually no options for the end-user to mess with, and they don't generally have to select the location they need to connect to. This greatly reduces friction for the end-user. Plus, because these architectures are usually cloud-hosted, the tunnel terminates in the cloud of the vendor of your choice instead of on a device you manage.

Though there is some variation between products, authentication in a full-client model is a bit different from traditional VPN. When an end-user authenticates, they authenticate not just to the service but also for the entitlement of the applications that have been defined. What I mean is that an administrator has assigned some applications (here referred to as App1) for them to reach. Once the user finishes logging in, they have authenticated to a service that has no Layer 3 defined on it and is generally not full-tunnel at all. I struggle to call this "split tunneling" but it seems to resonate with people. Anyway, the end-user is entitled to access App1, so a route (or packet filter) is installed on the end-point device. Now we have a tunnel of some kind that only allows a flow to/from App1. In some architectures, each application may have a TLS (or other) tunnel of its own for proper micro-segmentation.

At this point, if an end-user were to run a port scan, they would find that (depending on policy) they would only be able to reach App1 on a specific port or set of ports. No additional changes or access control are needed at the network layer to restrict or protect the rest of the network, because the end-user is not part of the network. The device likely doesn't even have an IP assigned that is part of the IP address scheme of the company; it likely uses the CG-NAT range and is not directly connected to the network where the application resides. As you add application entitlements, the Layer 3 of the tunnel changes (unless it's packet-filter based). As you remove entitlements, the reverse happens.
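To make that concrete, here is a minimal sketch, in Python, of the bookkeeping a ZTNA client does. All the names here are hypothetical; a real client receives entitlements from the vendor's control plane and programs the OS routing table or packet filter accordingly.

# Minimal sketch of per-app entitlement; names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AppEntitlement:
    name: str
    host: str   # where the application lives
    port: int   # the only port the user may reach

def rules_for(entitlements):
    """One host route (or filter rule) per entitled app -- nothing else."""
    return [f"allow {e.host}:{e.port} via ztna-tunnel  # {e.name}"
            for e in entitlements]

entitled = [AppEntitlement("App1", "100.64.1.10", 443)]
for rule in rules_for(entitled):
    print(rule)
# Removing an entitlement removes its rule; the rest of the network
# was never reachable in the first place.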

In client-less or browser-based access this is much more evident, because the end-user opens the browser and puts in the app they're looking to get to. One could say that this model is closest to the workflow that end-users experience when accessing SaaS applications. Though, internally to the cloud-based service, the browser-based access and the full client likely hit the same SDP, which has proxy functionality in it.

Bringing it all together

So, to close it off, one of the biggest and most misunderstood differences between cloud-hosted ZTNA services and traditional VPN services is the fact that in traditional VPN services you authenticate for entitlement to the service and the network. You have direct connectivity to this network and, if compromised, have the potential to do some damage. In ZTNA models the end-user is given access to the service and the applications they are entitled to based on some centralized policy. That's it. Aside from post-auth signaling and some service chaining for further packet processing, a compromised end-user can't really do much damage except maybe to the apps they have access to, on the ports they have access to. In most models, with some exceptions that use vNGFWs deployed into the cloud, the end-user isn't even part of the network that is routable to the applications themselves.

ZT, ZTN, ZTNA, Oh my! (and some other things)

Hello! I’m back with another installment in a series of posts around zero-trust network access. I’ll explain some more of the architectural components (in general) and touch on the difference between the ZT, ZTN, and ZTNA nomenclatures.

So – what is the difference between ZT, ZTN, and ZTNA? Well, Zero Trust is a set of principles, such as least privilege, (micro) segmentation, and machine/user identity, that ZTN (Zero Trust Networks) and ZTNA (Zero Trust Network Access) build on. Depending on whom you ask, Gartner and NIST both go into great detail on ZT and ZTNA. In addition, a variety of vendors have several posts on their position or take on zero trust and what it means to them.

As far as ZTNA goes, Zero Trust has been coupled with Zero Trust Network Access to make traditional VPN redundant for remote, private access to your applications, wherever they are. Thinking of it another way, ZTNA includes ZT and layers on varying degrees of features that are more or less enabled by the fact that you no longer need to define a perimeter for your VPN solution. At this time I think ZTN and ZTNA are used interchangeably, though you can apply zero trust principles to networks in general for ZTN, and I think that ZTNA is mostly focused on remote access to applications.

Perimeter-less remote-access solution

The idea of ZTNA is the absence of traditional VPN connectivity without sacrificing the security of the entire enterprise. In fact, one could say it's saving the enterprise from further exploitation. Notice I didn't say the absence of tunneling technology. The user experience is the same whether inside or outside of the corporate network. Furthermore, depending on the architecture, users connect more directly to the application you're hosting instead of through a branch and/or HQ/data center, which may be sub-optimal. This is especially true for global enterprises. I know I am not the only one who's seen infrastructure built such that end-users somewhere in Asia have to connect to a hub or spoke somewhere in the US or EU (or the reverse). On top of that, remote access configurations are typically 'full tunnel' VPN. The five most often referenced cons of full-tunnel VPN are:

  1. Snaring SaaS-bound traffic
  2. Directing regular internet traffic to the HQ/branch/DC
  3. Preventing local access to needed resources
  4. Increasing costs and usage for all that traffic traversing the network
  5. Having to deal with/think about the client in any way

Moving further, with ZTNA it is the complete lack of network definition in tunnel connectivity that is at the heart of zero-trust network access infrastructure. Technically, split tunneling happens, but at the application level. This generally means packet filtering or host routes on the end-point. Each application is defined and assigned to users, who are either permitted or excluded from using these applications through authentication, so why would you then create an underlay of a network-defined tunnel at all? Doing so is redundant. Even if a service offers subnet-level definition of the network tunnel, customers can easily (and do) put all of the RFC 1918 space in such a configuration to quickly cover private space where none is missed. In the end, this is just a de facto full-tunnel configuration, which negates all of the benefits of a ZTNA service that allows more direct, but secure, access to public and private applications without changing the experience.

But wait, there’s more!

There is one thing I haven't touched on yet. I bet you're thinking about your existing investment in all those firewalls all over your enterprise and in your cloud providers, right? How do you connect those? What if you want to use them and this service? Well, you can! Migration paths are easy, but here is the nice part: your firewalls go back to being…firewalls! Not firewalls and VPN concentrators and on and on. Poking holes in your firewalls to allow some of these applications outside of VPN is also, potentially, no longer needed.

Look Ma, no policy engine

I'm just kidding, of course there is a policy engine. But when you take a look at a ZTNA service, there is little in the way of a policy engine that's trying to win a drag race. Policy is very simple and relies heavily on your identity provider (IdP) of choice and the MFA/SSO mechanisms you configure. For instance, in a very simplified ZTNA architecture, once you have defined the application and who can access it under what conditions, access can be granted simply by using your Google account as the IdP; provided you pass, you're in.
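To illustrate, here is a toy policy check in Python. It assumes the IdP has already authenticated the user and handed back verified claims (say, via SAML or OIDC); the claim names and structure are entirely hypothetical.

# Toy ZTNA-style policy decision; claim names are hypothetical.
def access_granted(claims: dict, app: str) -> bool:
    """Allow only if the IdP verified the user, MFA passed, and the app is entitled."""
    return (
        claims.get("authenticated", False)
        and claims.get("mfa_passed", False)
        and app in claims.get("entitled_apps", [])
    )

claims = {"authenticated": True, "mfa_passed": True, "entitled_apps": ["App1"]}
print(access_granted(claims, "App1"))  # True
print(access_granted(claims, "App2"))  # False: never entitled, never reachable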

Tunneling protocols

We can no doubt talk about tunneling protocols, and the religion of such, all day or for several days. However, I do like the simplicity of TLS/DTLS vs IPsec tunnels. I won't go into why, but I can tell you that almost any ZTNA architecture today is TLS/DTLS based. They are arguably easier to work with and manage than IPsec, and integration into web browsers is easier as well in terms of ZTNA architecture. There are a couple of exceptions to this, where some vendors are using WireGuard tunnels quite successfully in their service.

The main ZTNA parts that make up the whole

We have a few choices! Each of those choices takes the zero-trust principles and adds its flavor to fit a variety of target use cases that are more comfortable to enterprise customers. Below I am showing you the general architectures that you'll run into. If you're wondering what an SDP is below, it stands for Software Defined Perimeter. Put another way, this is where the TLS/DTLS tunnel generally terminates and where enforcement happens. These are usually multi-tenant systems that are closest to the end-point, and if there is a connector, closest to the connector.

In general, there are three main parts to the architecture:

  1. You have a client, hopefully lightweight and non-intrusive. In fact, you shouldn't have to think about this client at all.
  2. You have the SDP/enforcement node(s)/proxy.
  3. You have a connector that is also a reverse proxy of some kind. The connectivity for this is 'inside-out', meaning that it calls home. The effect is that your applications are invisible from the public internet. Why is this important? Attack surface. You don't have to make any ACL exceptions to host this. (A rough sketch of the call-home idea follows this list.)
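Here is a rough Python sketch of that 'inside-out' idea, with hypothetical endpoints. Real connectors use mutual TLS and a vendor-specific control protocol; the only point being made here is the direction of the dial.

# Sketch of an inside-out connector; endpoints are hypothetical.
import socket
import ssl

SDP_BROKER = ("broker.example-ztna.com", 443)  # hypothetical SDP node
PRIVATE_APP = ("127.0.0.1", 8080)              # app the connector fronts

context = ssl.create_default_context()
with socket.create_connection(SDP_BROKER) as raw:
    with context.wrap_socket(raw, server_hostname=SDP_BROKER[0]) as tunnel:
        # The connector dialed OUT, so no inbound firewall rule exists at all.
        # The SDP pushes entitled user traffic down this outbound tunnel and
        # the connector relays it to the private application.
        while data := tunnel.recv(4096):
            with socket.create_connection(PRIVATE_APP) as app:
                app.sendall(data)
                tunnel.sendall(app.recv(4096))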

Some variations of this model are as follows:

ZTNA Architecture 1 – Client + Connector + SDP

  • End-point client – lightweight, probably centrally managed
  • Connector – Usually a reverse proxy VM that is deployed inside the data center or IaaS
  • SDP – Software Defined Perimeter – an enforcement node that handles authentication, proxying of traffic, and stitching of tunnels

ZTNA Architecture 2 – Clientless + Connector + SDP

  • No client on the end-point. Traffic acquisition handled in a variety of ways such as DNS/CNAME
  • SDP – Software Defined Perimeter – Honestly, the SDP/proxy/enforcement node is a pillar of any ZTNA service so I expect to always see this (and so far do)
  • Connector VM or connector process. There are two flavors here: either a full VM, or a service/process installed on the application server to act as a proxy for it. Either way, these connectors call home to the service, so again no ACL changes are needed.
  • Could be a great ’third-party access’ use case

ZTNA Architecture 3 – No connector and no client, but still an SDP

  • No client or at most some integration into a browser
  • No connector – Access to private applications can be handled with very limited exposure. When the SDP proxies the private access traffic, the source IP is generally the service provider's IP, so ACL changes are limited to a specified IP range of the provider. Still invisible to everyone else on the public internet.
  • SDP still exists and handles traffic acquisition, enforcement, and authentication
  • Could be a great ’third party access’ use case

There are so many great use cases for ZTNA, but probably the most exciting one is the third-party or B2B use case. I'm going to save this for another post, but imagine giving a third party or B2B partner access without having to worry about setting up lots of VPN config or about whether you're secure; most people I have worked with don't want to, or can't, risk offering that at all, and ZTNA lets them do so in a secure way that doesn't put the third party on your corporate network. You know what, I just realized that I haven't even gotten into how RBI (Remote Browser Isolation) can be combined with these services for enhanced third-party access. So I'll talk a little bit about how that looks as well.

If you have questions, reach out! I love talking about the transformation of remote access. Effortless use, better security.

Hank

Zero Trust Network Access – The journey continues!

For the next part of my ZTNA (Zero Trust Network Access) series, our journey continues through the ZTNA space of today. There are probably many questions, and some of you are probably unsure what all this Zero Trust stuff is and how it relates to networks or remote access. I'm going to start with what is, to me, the simplest and most well thought out architecture when it comes to dead-simple, easy-to-use access to applications, whether they are public or private. Don't worry though, I will get to the variety of architectures that have formed going all the way back to 2009/2010.

First, some history. Even before the pandemic, a shift had been happening. People with certain roles and responsibilities have been able to work from home at least part-time for many years now. In fact, I had a 3-year stint as a full-time WFH employee in my AT&T days, probably 10 years ago. When we all worked in the office and only a few worked from home, or did so part-time, perimeter security probably made more sense. Today, a lot more people are home. Not only that, depending on your enterprise, you're either fully cloud-based, partially cloud-based, or thinking about it. Now the enterprise perimeter is stretched to more places, requiring bigger and bigger security stacks.

BeyondCorp – The most talked about and well-researched security architecture of our time (in my opinion).

For anyone that hasn't taken the time, I invite you to read the research on the BeyondCorp model. It is enlightening and eye-opening. Probably my favorite part is the user education and experience story located here -> https://storage.googleapis.com/pub-tools-public-publication-data/pdf/c8da594124dab1f91e6750995e2b7805403b19f1.pdf <- from new hire through the entire life cycle of the employee experience. You'll notice that people swear they need VPN (Virtual Private Networks). They try to explain, in a variety of ways, that what they're doing requires connectivity to the corporate network. It is an amazing tale of determination and education on the part of Google to wrest unnecessary VPN connections from their employees. If you don't read anything else, I highly recommend you start there. It's like one of those stories where you wonder why everyone always runs towards the danger when they hear something on the other side of the door. You want to scream DON'T OPEN THE DOOR.

In or around 2009, after an attack, Google declared that they were going to move all of their applications to the internet, and BeyondCorp was born. I have no doubt that this strikes anxiety and/or pushback into many in the networking and security space. "But Hank", you say, "I can't and/or don't want to move my applications to the internet. That's Google, and we're not Google." I hear you! Though, at the same time, I know that if we dig deep into what we're securing and why, we will come to the reasonable conclusion that we have stacked security tech up to the Nth degree and still, once a threat is inside, it can be hard to stop. Google formed their zero trust network infrastructure the way it worked best for them. In the last couple of years, other companies have taken what has been done with BeyondCorp and made it their own. One of the key points of the BeyondCorp model is that devices are managed devices. Those managed devices are secured and monitored, and so in this model the IT administrators can restrict access, via a variety of authentication methods, to private applications that reside on the open internet. If you like, you can read more about the attack in their research documents.

The model is so simple it could be confusing to some, or hard to wrap your head around. I once heard the phrase "It's so easy, it's hard", and I have long since adopted it and applied it to so many other things. I think the problem here is that we're looking at all that we're taking away and focusing on that. We may not be considering what we're gaining instead. The model of a safe internal corporate infrastructure is dead or dying. This has been proven time and again through a variety of attacks and breaches in which, once the perimeter is breached, the attacker potentially has the run of the internal network. This is not unlike any number of war movies. Large powerful walls protect the citizens. Once breached, everything is on fire. I'm thinking Troy or Kingdom of Heaven apply here.

I bet you're wondering – do I have to change everything at once? Nope! Phased migrations are possible! The elegant attribute of the BeyondCorp model is that you can run it alongside your existing infrastructure and migrate over in phases as needed. As we take our journey together, we're going to find that this is largely true for any of the ZTNA architectures we have today.

If you take anything away from all of this, it's that it shouldn't matter where your applications are. The user experience should be largely the same. Security doesn't have to be nearly as complicated as it seems to be. Sometimes it almost seems like we stack the deck higher and pile on more because we think that's how we win. In my next post, I will begin to dig deeper into the variety of other architectures that exist, and maybe after that we can dig into the use cases that have come out of all of this. I'm also going to answer the question "What is the difference between ZT, ZTN, and ZTNA?" If you want more information on the BeyondCorp research and architecture please go here -> https://www.beyondcorp.com <-.

The Explosion of Zero Trust Network Access

In the last couple of years, the tech world has been buzzing about Zero Trust. Every month or so there seems to be a new product with the zero trust label on it. It's almost like seeing those gluten-free labels everywhere. Since 2019 or so, remote access has been undergoing a transformation in the form of Zero Trust Network Access, or ZTNA. ZTNA has been a big part of my work for the last couple of years, and it is a very exciting time to be in the Cloud Security space as a result. In fact, if you take a look at SASE and its components, ZTNA is front and center as a pillar of SASE architecture. One of the things I run into most often is a lack of education around what it is and where it fits. In fact, without diving deep on this topic it is easy to declare that this is nothing new or nothing special. This post, and potentially a few more after it, will attempt to explain the nuances around ZTNA and what it means for perimeter security. I will also dive into some of the architecture that makes up ZTNA.

Another thing I run into a lot is the pushback against zero trust remote access solutions. I imagine that because we're so heavily invested in perimeter security, the castle and moat are all we know. So something like ZTNA comes along and people probably imagine some free-wheeling insecure architecture that isn't as secure as the giant security stacks that are currently implemented in most places. In future posts, we're going to dive deeper into why this isn't true. Spoiler alert: when you implement a well designed and thought-out zero-trust-based infrastructure, some of the traditional security stack is generally redundant.

Before we go any further, a word on perimeter security. What you may have today for your remote access is a client that you use to authenticate to the corporate network. Once there, you are assigned an internal corporate IP address and, like magic, you're now part of the corporate network as if you were there in the office. Sounds great, right? Nope. As part of the network in a perimeter security design, you and/or your device can move laterally across the network infrastructure. If you are infected, the potential to spread that to the rest of the company is enormous.

Several write-ups declare the death of VPN as a result of ZTNA eating the remote access world. I agree with this message. Hear me out. This isn't to say that tunneling technologies are going away or that all VPN is going away. Sure, there will be use cases where traditional VPNs will continue to be needed. What the death of VPN really refers to is the end of the need for a tunnel between an end-point device and a network where the endpoint becomes completely part of the network. You might have heard this type of tunneling referred to as 'full-tunnel' or even 'split-tunnel' configured remote access.

In addition to combining zero trust principles with remote access to offer enhanced, continuous protection, there has been a huge push for a better user experience. I can't think of a single developer/engineer that wants to think about their remote access solution: whether they are connected or not, whether they are supposed to disconnect or reconnect depending on where they are. People just want to open up their devices and work. So what we need are solutions that create an easy-to-use experience that is secure whether inside the corporate network or not. ZTNA attempts to bridge the gap between dead-simple access to private applications and the secure access that IT administrators expect.

Despite marketing efforts, zero trust itself is not a product, but a set of principles found in various products that fit a variety of use cases. The philosophy of zero trust is that no device or user should be trusted simply for being inside the network, as is the case with perimeter-based security. Since we're discussing ZTNA, the principles as they relate to remote access with zero trust are, in general, the following:

  • Least-privileged access between users, devices, and workloads
  • Micro-segmentation at the application level regardless of network segmentation
  • Application invisibility (debatable) – in this case, applications being hidden from the open Internet
  • Multi-factor authentication also known as MFA
  • Device identity and, additionally, service, application, and process identity

The last entry “service, application, and process” identity is new for 2021 in terms of the evolving architecture of ZTNA. The other components have been part of a good ZTNA architecture for some time now.

In 2019 there were only a handful of companies that offered ZTNA services. Since then, and thanks to the pandemic, ZTNA services have seen explosive growth. In fact, as of January 2021, there are now 15-18 organizations jumping in to offer services and trying to differentiate themselves from the rest of the space. There have also been some acquisitions along the way. Sometimes it is almost as though I can't even keep up. The research in this space is never ending and constantly evolving.

With this kind of explosive growth in the zero-trust security space, it is no wonder a ton of different architectures have emerged. In 2019 Gartner published a general architecture for ZTNA and, to no surprise, many vendors have mirrored that architecture. Some, however, have pushed boundaries to offer innovative approaches to providing access to private applications that is invisible to the user.

In future posts, we’ll dig into these architectures and I will offer my thoughts on the best of them.

Hank

cloud-init booting vManage

Automating Viptela SDWAN controller deployment:

Hello – one of the areas I do a lot of work in these days is experimentation with various technologies. I needed a way to quickly deploy and build Cisco Viptela SDWAN environments, whether that be in AWS, VMware, or KVM. Like many, I started searching for all the ways this could be done. As you might imagine, I found a lot of blogs and information on the topic.

My plan was to use terraform to help me deploy the controllers and edge devices in an automated way. Sure, there are plenty of ways to do this, but each of these ways left me thinking something was missing. So this post is less about the ways we can complete this task and more about what I did *not* find along my journey.

We'll start with vManage. For the uninitiated, vManage is the controller responsible for management: of itself, of the SDWAN network, and of the controllers and edge devices. A key difference that you'll notice when deploying vManage is that you have to have a second storage device attached. That storage device is for the database that will be created and used, and it must be at least 100GB according to the prevailing documentation. When you start up the controller instance you're prompted to log in. After you log in for the first time you're asked to select a secondary disk, then asked to format it. vManage then reloads and carries out the disk addition and formatting.

This is my focus today. That's a very manual process, one that I would like to avoid in my project for short-term build needs. You may have noticed throughout the documentation related to vEdge that you can generate a cloud-init file to bootstrap the edge device. So the question becomes: can you use cloud-init for vManage?

Yes – you can. First, let me tell you that cloud-init isn't new. Not only that, cloud-init user-data is widely used in deploying instances in AWS as well. While you don't have the luxury of having one generated for you, we're going to talk about how we can bootstrap the vManage build process so that the questions asked on first boot are answered. We'd also like to configure as much as makes sense while the vManage is booting up.

We’re going to start out by creating a file called ‘user-data’. The file might look something like the following. We’ll examine the parts to better understand them.

Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
vinitparam:
 - format-partition: 1
 - vbond: 10.10.10.10
 - host-name: vmanage
 - org: "XXXXXXXX"
 - rcc: True

ca_certs:
  remove-defaults: False
  trusted:
  - |
    -----BEGIN CERTIFICATE-----
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -----END CERTIFICATE-----


There are two parts to user-data. The first thing to note is that these files are essentially YAML. The first part is cloud-config. Sometimes you may see a file name associated in cloud-init files, but you won't need one here. This first section is where you define the initial parameters. As you can see, there are a few things we can do here. Right off the bat, "format-partition" is set to 1. This takes care of being asked what disk to use and whether we want to format it.

Some of the other parameters are self-explanatory: org, vbond, etc. But you can also add a certificate to the deployment. Notice that the multipart boundary is declared in the Content-Type header and used as such going forward in the user-data file. It can be any string; it doesn't have to say BOUNDARY.

Now that we have that out of the way, we'll explore the second part of this multi-part file: the cloud-boothook. Note that this second part sits directly below the cloud-config part above.

--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

#cloud-boothook
!
system
 system-ip 10.10.10.250
 site-id 100
 host-name vmanage
 sp-organization-name "Some org"
 aaa
  user admin
   password admin
!
vpn 0
 ip route 0.0.0.0/0 10.10.20.1
 no interface eth0


 interface eth1
  ip address 10.10.20.20/24
  no shutdown

  tunnel-interface
   allow-service all
  !
--==BOUNDARY==

This next part, the cloud-boothook, is where we can define the configuration that gets applied to vManage. As you can see, this is standard CLI config, so anything here is simply run at boot when the config is loaded. In essence, you can take care of completely configuring the vManage without having manually configured anything so far. Convert this file to an ISO image, attach it as a CD-ROM, and away you go! (A sketch of that step follows.)
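For example, on a Linux build host, packaging the user-data file into an ISO might look like the sketch below. I'm assuming genisoimage is installed; the volume label your hypervisor or datasource expects may vary, so treat this as a starting point rather than gospel.

# Sketch: wrap user-data in an ISO for attachment as a CD-ROM.
# Assumes genisoimage is installed on the build host.
import subprocess

subprocess.run(
    ["genisoimage", "-output", "vmanage-cloud-init.iso",
     "-volid", "cidata", "-joliet", "-rock", "user-data"],
    check=True,
)
# Attach vmanage-cloud-init.iso to the vManage VM before first boot.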

One thing to note is that cloud-init bootstrapping works in 19.x and 20.x code trains for Cisco Viptela based SDWAN controllers.

In my next update, we’ll talk about the vSmart and how to format the cloud-init for that controller.

Be safe and be healthy.

It’s been a minute..

This is probably my 3rd, maybe 4th, attempt at a blog of some kind. I start it and then I get anxious about continuing it. The blogging landscape in I.T. is tremendous. One can easily wonder what differentiator they can bring to the blogosphere.

So, instead, I am realizing that this blog isn't to gain some following; it's to write, think, and share my thoughts, even if in the end it's just with myself. But if my thoughts help others, then that is a win.

So, here is what I'm thinking about today:

There is a never-ending number of blogs that ask the question "Is the CCIE worth it? Has the CCIE lost value?", with many variations on the same question. We end up with more or less two camps: one camp that praises the CCIE and another that criticizes it for a variety of reasons. Yes, people do VERY well with zero certifications. It literally comes down to "you do you."

First, let me explain that I reference the CCIE here, but what I really mean is <insert your favorite high-level certification here>.

Well, I'm going to take a different approach. I got my CCIE in 2008, which seems like a lifetime ago now. Since then I have grown in ways I didn't think about in 2008. Yet, I still held on to the CCIE because I put my heart and soul into it, like many other people before me. Let me say for the record that, for my career, it was absolutely worth it. I got a lot out of it. More than I imagined.

So for the long-time CCIEs, I want to talk to you about the unthinkable. The time when you stop referring to yourself with the CCIE.

Hear me out.

There will be a time when you have moved up the stack far enough that people referring to you and your CCIE together can hurt you, depending on your path. For me, that time is at hand, mainly because the places I'm going in open networking, zero-trust network access, posture, and policy aren't confined to tests. In fact, in my 18 months at Cisco, I have been working in Cloud Security embedded in an infrastructure that is 100% cloud-native. I have been privileged to be part of teams and experimentation projects that have stoked the same type of fires that I had when I first started in networking. The CCIE is like a platform that, once reached, you dive off from into bigger and bigger pools; pools you won't see until you achieve the level of knowledge the CCIE tests. The platforms to jump off of never end.

Remember that the CCIE tests a subset of knowledge for the networking professional. This “subset” feels massive on the journey of course. Many CCIEs also have expertise in Cisco products and protocols that aren’t necessarily part of the lab, but make the person holding the CCIE more rounded.

For those people on the journey to the CCIE, I invite you to build knowledge around what you have learned on your journey over and above the CCIE blueprint. You'll be happy that you did. In 2020 that means a deep understanding of IaaS: learning a language, learning containers, learning Linux, and especially learning Linux networking. I don't recommend doing it all at once.

For the networking professional who has attained the CCIE, I invite you to realize that it is just another platform to another level, a greater abyss. Realize that you will have to eventually shed the CCIE title if you see that keeping yourself anchored to it keeps you from growing.

Shedding something that you are well known for can be difficult. You'll know it's time when you hear someone say "Hank, the CCIE and networking SME" in the company of people you aspire to grow to be more like; you, like me, may suddenly feel that tug from the anchor.

None of this is bad. No one reading this should be thinking about any kind of negative connotation. Shedding former lives is a good thing. Growth is a good thing. Don't let the F.O.M.O. associated with shedding your association with a certification keep you from reaching new heights and new areas of discovery.

Certs are amazing things to many people (and nothing to others), but don’t let that be the end of your story.

The struggle in learning new things

When I was going through my degree program I was having a hard time with the exams, so I went to see an advisor to get some help. Through our conversations I realized that I was getting a lot of help along the way from people, the professor, and the internet. That help included seeing not only the answer, but the steps. I would look at how the answer was derived, then I would type/write out the problem. The advisor mentioned that this mode of 'learning' I was stuck in was the problem. In fact she said "you're not learning anything", and you know, I wasn't. I wasn't learning because I was parroting what I was shown, so I wasn't trying to discover the path on my own. In essence, there was no struggle. I had fallen into this trap where I just wanted to get the work done, get the grade, and move on. The reason I had fallen into this trap was that I was in a hurry. I just wanted to make it to the end. That's what makes sites like Stack Overflow and others so good and so bad.

After a bit of soul searching I realized she was right. I changed my habits and I refused to allow myself to receive too much help. Yes, you can get so much help that it is detrimental to your growth.

People like to be helpful and educate others; I get that. Some people are super passionate about it. That's a great trait, but there are a couple of downsides to it. One is that the learner believes that getting the answer and memorizing the answer is learning. It's not. What if the conditions change? Then that answer is no longer accurate. The other is that teachers want others to succeed so much that they will literally take the struggle away from the person they're teaching. Will the learner pass a test? Sure. Have they acquired hard experience through failure and success in the struggle to learn? Nope.

A great example of this is what we learn in the IT networking space. When you start out, one of the first things someone may tell you to do is memorize the entire table of numbers of hosts/numbers of subnets. Why? Because it's on a test. So you memorize it. Great, you get the answer right. Do you think you are now an expert on subnetting? No. Because you didn't take the time to learn the fundamentals beneath subnetting. Do you want to be an expert on subnetting? I hope not.

I've been around networking as either an architect or an engineer for a long time. Do you think I have the numbers of hosts/numbers of subnets memorized? Nope. Do I feel like someone else who does is better than me? Nope. Why? Because I have learned mechanisms to derive the numbers instead, in a way that clicks in my brain, not the way someone else learned it.
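For what it's worth, the whole table collapses into one line of arithmetic: for an IPv4 prefix length p there are 32 - p host bits, giving 2^(32 - p) - 2 usable hosts (subtracting the network and broadcast addresses). A quick sketch:

# Derive hosts-per-subnet instead of memorizing the table.
for prefix in (24, 25, 26, 30):
    host_bits = 32 - prefix
    usable = 2**host_bits - 2  # minus network and broadcast addresses
    print(f"/{prefix}: {host_bits} host bits -> {usable} usable hosts")
# /24: 8 host bits -> 254 usable hosts ... /30: 2 host bits -> 2 usable hosts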

Senior people in IT should absolutely pay it forward by teaching others if they want to do so. Be cognizant of the way you're teaching. Don't give people the easy button. If you write a blog post with instructions, realize that your method may only work under your conditions. Your instructions should contain more than just the steps; they should contain content that provokes thought in the person reading them.

My advice to anyone learning something new, whatever that is: struggle with it first. If you're stuck, flip the problem on its head. Come at it from a different angle. Leave the problem and come back to it. Then ask for guidance, but don't cheat yourself. Don't seek the answer through that guidance. Seek the knowledge to help you figure your new world out.

Hello.

I’ve mostly disdained the idea of a technology blog. There are so many out there. So many ideas and so many purported thought leaders. I’ve always had this idea in my head that everything, good and bad, is covered by someone in some way. How does one differentiate themselves? I finally have some ideas on how.

It seems that everything can be, and is, delivered as a service, so why not me? Hank as a service.

As I get started and try writing with some consistency, I think I'll start with my thoughts for the day, maybe based on something that interested me in the technology space.

Today I'm going to talk about studying for exams. There are seemingly infinite ways to study, and there's always at least one person trying to tell you the best way. We all learn differently, so there is no cookie-cutter way that's going to work for everyone. Certainly you should keep an open mind and listen to what others have to say, but what it really comes down to is what works for you and what is going to make that lightbulb go off in your head.

I remember when I was learning to work in binary for the first time. It seemed insurmountable. There were what seemed like 20 ways to convert binary. There was the powers-of-two method, the places method (128, 64, 32, 16, 8, 4, 2, 1), and the realization that counting in binary (base-2) is the same as counting in decimal (base-10), which we spent our entire childhood learning in school. For me, that realization is what did it. I finally understood binary in a way I hadn't thought possible before.
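If it helps, here is that realization in a few lines of Python: the places method is just positional notation with place values of 128, 64, 32, 16, 8, 4, 2, 1 instead of thousands, hundreds, tens, and ones.

# The places method: sum the place values wherever a 1 appears.
PLACES = [128, 64, 32, 16, 8, 4, 2, 1]

def to_decimal(bits: str) -> int:
    return sum(place for place, bit in zip(PLACES, bits) if bit == "1")

print(to_decimal("11000000"))  # 192 = 128 + 64
print(to_decimal("00001010"))  # 10 = 8 + 2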

Getting the right method is especially important when it comes to studying for technology exams such as vendor certifications. You may be a visual learner, a reader, a doer, or a combination of any of them. If you start down the path of forcing 'the best way', you're going to make studying harder than it needs to be.

Take a moment and try to discover what works for you. Try them all, even the methods you think are less likely to work. I think you'll be surprised at what you discover about yourself.

Got a comment? Hit me up on twitter @HankYeomans

-Hank