<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Architecture on Problem of Network</title>
    <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/tags/architecture/</link>
    <description>Recent content in Architecture on Problem of Network</description>
    <generator>Hugo</generator>
    <language>en-gb</language>
    <lastBuildDate>Mon, 22 Jun 2020 12:51:00 +0000</lastBuildDate>
    <atom:link href="https://6364c9bf.problemofnetworkdotcom.pages.dev/tags/architecture/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Software Defined Waffle with a gitops topping</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/software-defined-waffle/</link>
      <pubDate>Mon, 22 Jun 2020 12:51:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/software-defined-waffle/</guid>
      <description>&lt;p&gt;Over the last two years or so, I have been on an adventure with Data Centre Infrastructure renewal. As past posts may allude to, ACI was a big part of what we did, but before anyone gets all dogmatic about it, know that we didn&amp;rsquo;t go &amp;ldquo;All in&amp;rdquo; with that one product, since I personally don&amp;rsquo;t subscribe to the &amp;ldquo;DC Fabrics cure all ills&amp;rdquo; mantra.&lt;/p&gt;&#xA;&lt;p&gt;CLOS fabrics and the various approaches to overlays within them are great at providing stable platforms with predictable properties for speed, latency and scale. Unsurprisingly, they go on to do a great job in server farms that can make the best use of that flexibility. During recent conversations on DC refresh, our Arista friends have been extremely keen to get us to run our Internet BGP border on the fabric as well. The 7280SR2K can handle 2M routes in FIB, they say; just lob stuff into a VRF, a bit of policy, and voila. Yeah.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ACI: Initial Design Considerations</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-initial-design-considerations/index.md/</link>
      <pubDate>Mon, 18 Jan 2016 11:49:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-initial-design-considerations/index.md/</guid>
      <description>&lt;p&gt;ACI brings with it many different constructs for operating networks, some of which are analogous to classical networking, some of which are literally bat-poop crazy.&lt;/p&gt;&#xA;&lt;p&gt;As per usual, you can find lots of resources elsewhere on how to structure ACI fabrics, so I&amp;rsquo;m not going to waste time on what you &lt;em&gt;can&lt;/em&gt; do; I&amp;rsquo;ll focus on what I am going to do (roughly).&lt;/p&gt;&#xA;&lt;p&gt;The image below was unceremoniously stolen from Cisco themselves, from the critical read &lt;a href=&#34;http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals/b_ACI-Fundamentals_chapter_010001.html&#34;&gt;ACI Fundamentals&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>ACI: Mini Rant to INSBU</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-mini-rant-to-insbu/</link>
      <pubDate>Fri, 15 Jan 2016 14:34:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-mini-rant-to-insbu/</guid>
      <description>&lt;p&gt;Before I get too wound up I should probably say that all of this was directed to my friends there first, and whilst I won&amp;rsquo;t say much about their thoughts, I don&amp;rsquo;t think this is particularly new to them, or out of place.&lt;/p&gt;&#xA;&lt;p&gt;I have a fondness for ACI.  I think it&amp;rsquo;s innovative, and once you break through the naming conventions and the terminology, it&amp;rsquo;s exactly what I think Enterprise should be doing in terms of Next Generation Networking.  That said, INSBU are not helping themselves penetrate the market, and as such, are putting themselves at risk of falling behind to OpenStack.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ACI: Rack &amp; Stack</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-rack-n-stack/</link>
      <pubDate>Fri, 15 Jan 2016 12:31:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-rack-n-stack/</guid>
      <description>&lt;p&gt;Plumbing ACI is something that YouTube has you covered on.  I won&amp;rsquo;t reinvent that wheel.  For the initial standup, I am doing the bare minimum connectivity; each leaf has one 40G uplink to each spine, meaning 80G of North/South bandwidth.  This will double up when we are preparing for Production service, matching my UCS/FI bandwidth between each Chassis (4x10G links to each side of my 2208XPs).  My 3 APICs are configured as follows:&lt;/p&gt;</description>
    </item>
    <item>
      <title>ACI: The Setup</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-the-setup/</link>
      <pubDate>Tue, 12 Jan 2016 12:30:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-the-setup/</guid>
      <description>&lt;p&gt;On Friday last week we rolled out our ACI solution into one of our DCs. The setup is simple, comprising:&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code&gt;2x Nexus 9336pq &amp;quot;Baby&amp;quot; Spines&#xA;4x Nexus 9396px Leaf Switches&#xA;3x APIC Controllers&#xA;2x ASA 5585x Firewalls&#xA;&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;p&gt;The compute behind it is UCS based and we have F5 LTMs in the ADC role.&lt;/p&gt;&#xA;&lt;p&gt;Over the weekend I provisioned it.  That did not go well.  Today I had to go back and revisit the cabling, then the initial Fabric setup, and redo the entire thing from scratch. Oops.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The ACI Adventure Begins</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-adventure-begins/</link>
      <pubDate>Sat, 09 Jan 2016 18:21:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/aci-adventure-begins/</guid>
      <description>&lt;p&gt;Starting yesterday I began to deploy our Nexus 9000 ACI solution into our Datacentre. Scary yet fun times are ahead.&lt;/p&gt;&#xA;&lt;p&gt;Over the course of the project I will do my best to chronicle anonymised info about what we did and how we did it.  Some of that may be of use to another ACI hopeful, whereas some will be pretty specific to my environment.  One thing I won&amp;rsquo;t be doing is reinventing the blogging wheel, and I will choose to refer to others who helped me, rather than rehash the same subjects over and over again.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The SDN Conundrum</title>
      <link>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/the-sdn-conundrum/</link>
      <pubDate>Fri, 22 May 2015 02:48:00 +0000</pubDate>
      <guid>https://6364c9bf.problemofnetworkdotcom.pages.dev/posts/the-sdn-conundrum/</guid>
      <description>&lt;p&gt;Oh how the world has changed since I started out in this wonderful trade.&lt;/p&gt;&#xA;&lt;p&gt;We used to have VLANs and subnets; switches, routers and firewalls.  People would moan things didn&amp;rsquo;t work and we did a traceroute to figure out why.  We would bash out a fix, and if it broke, we would bash out another.  It was the wild west, and that was fun.  Cowboy hats were standard issue.&lt;/p&gt;&#xA;&lt;p&gt;Then along came the bad guys, and with them, the policy doctors.  Changes became more structured and requirements became more complex.  Environments spiralled out into wider geographical areas and management became less about break fix and more about tightly structured architecture.  The industry responded with protocols and toolchains, each with their own use case, and bit by bit, the sector split up into the key areas of WAN, DC and Campus.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
