
    Sunday, February 2, 2020

    Digi bought Opengear Networking



    Digi bought Opengear

    Posted: 02 Feb 2020 07:13 AM PST

    https://www.channelpartnersonline.com/2019/11/08/digi-expanding-market-reach-technology-with-opengear-acquisition/

    At my last job we had an Opengear and a Digi console server plus some Cyclades, and the Opengear was way nicer. Digi seemed very similar to Cyclades and was OK. At my current job we've been replacing Cisco 2900 console servers with all Opengear and just started using Lighthouse, which is awesome, especially with how it uses cellular as a backup path. So hopefully Digi knows what they have.

    submitted by /u/telestoat2
    [link] [comments]

    Palo Altos in AWS: can't really wrap my head around the public-facing side.

    Posted: 02 Feb 2020 09:13 AM PST

    I have a product whose network services I manage, and it has been migrated to AWS. Without getting too far into details from an OpSec perspective, the product has about 200 servers that are internet-facing with no firewall control.

    Because AWS manages Elastic IPs (public IPs) for you, you essentially just get a collection of public IPs. Of these 200 servers, none of their public IPs are in the same subnet.

    I've PoC'ed a Palo Alto to provide a firewall for north-south traffic, but I just can't figure out how to route this myriad of 200 individual IPs to the firewall from the internet. There's a boatload of documentation, but it all seems to gloss over this point. If you have one or two web servers that can sit behind a public load balancer, everything seems fine, but the general idea of NATing your public IPs to the servers' internal addresses seems to have been lost with cloud infrastructure.

    Any ideas?

    Again, routing traffic outbound to the internet through the firewall is fairly easy, but:

    Let's just say, for the sake of simplicity, that I have 200 servers I need to SFTP to from the internet. How do I set that up in AWS, given that their public IPs are scattered across IP ranges, so the traffic routes through my virtual Palo?
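    One avenue that might fit here is AWS VPC ingress routing: associate a route table with the internet gateway (an "edge association") and point the workload subnets' CIDRs at the firewall's dataplane ENI, so inbound traffic is forced through the Palo before it reaches the servers. A minimal boto3 sketch of that idea, with entirely hypothetical IDs and CIDRs, not a confirmed fix for this setup:

    # Sketch of VPC ingress routing with boto3 (all IDs/CIDRs are hypothetical).
    # Traffic arriving from the internet gateway and destined to the workload
    # subnet is steered to the firewall's ENI instead of straight to the hosts.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    VPC_ID = "vpc-0123456789abcdef0"        # hypothetical VPC
    IGW_ID = "igw-0123456789abcdef0"        # hypothetical internet gateway
    FW_ENI_ID = "eni-0123456789abcdef0"     # Palo Alto untrust/dataplane interface
    WORKLOAD_CIDR = "10.0.10.0/24"          # subnet holding the SFTP servers

    # 1. Create an "edge" route table and associate it with the internet gateway.
    rtb_id = ec2.create_route_table(VpcId=VPC_ID)["RouteTable"]["RouteTableId"]
    ec2.associate_route_table(RouteTableId=rtb_id, GatewayId=IGW_ID)

    # 2. Route anything destined to the workload subnet through the firewall ENI.
    ec2.create_route(
        RouteTableId=rtb_id,
        DestinationCidrBlock=WORKLOAD_CIDR,
        NetworkInterfaceId=FW_ENI_ID,
    )

    The Elastic IPs stay mapped to the instances as before; the internet gateway still does the public-to-private translation, and the edge route table only changes the next hop for the post-NAT packet. The workload subnets' own route tables would need a mirror route back to the firewall ENI to keep the return path symmetric.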

    submitted by /u/Digital_Native_
    [link] [comments]

    VSX - VxLAN Implementation

    Posted: 02 Feb 2020 02:22 PM PST

    Has anyone successfully deployed an active/active datacenter using VSX with VXLAN, with BGP (or any other routing protocol) running between the Check Point firewalls and the Nexus switches?

    submitted by /u/agentjoks
    [link] [comments]

    What makes a good datacenter switch?

    Posted: 01 Feb 2020 08:52 PM PST

    I have wondered for a while what really makes a good datacenter switch. Embarrassingly, I don't understand why some switches are better than others in the datacenter.

    I get the need for things like L3, port speed, and larger TCAM (MAC addresses/routes).

    What about buffer size? Any other important factors I'm missing?

    Also, why pick a Nexus over a Catalyst? Why HPE Comware over an Aruba 3810? Insert other comparable brands here. (Sorry if I left out your favorite brand; those were just the two examples I could think of off the top of my head.)

    What else have I totally missed?

    (Sorry for spelling mistakes. I posted from mobile)

    submitted by /u/met3_1
    [link] [comments]

    What are your thoughts on "switch independent" teaming?

    Posted: 02 Feb 2020 10:04 AM PST

    So it seems like pretty much every vendor on the server side is now pushing this idea of using "switch independent" teaming, or in other words: it's a LAG on the server side, and on the switch side, it's not a LAG. The switch has no awareness that these ports are bundled together. No LACP, no Channel-group/Aggregated Ethernet, etc.

    As a network guy, I find that really off-putting. It just feels like an invalid configuration to me. In my experience, if you're bundling ports, it's ALWAYS been LACP on both sides.

    Microsoft is running with this "SET" (Switch Embedded Teaming) configuration. VMware seems to have its own flavor as well. I've read the white paper on MSFT SET teams, and it honestly seems really wacky. It talks about VMs forming an "affinitization" with specific host ports, and other craziness. At the end of the day it all just seems to work by using different MAC addresses to transmit data, and a different MAC bound in ARP to receive return traffic.
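    For what it's worth, the affinitization idea can be boiled down to a toy model: pin each vNIC MAC to exactly one physical uplink, so every upstream switch port only ever sees a given source MAC on one link and no LAG is needed on the switch side. A deliberately simplified Python sketch of that behavior (my own illustration, not Microsoft's actual SET algorithm):

    # Toy model of switch-independent teaming / "affinitization" (illustration
    # only, not Microsoft's SET implementation): each vNIC MAC hashes to one
    # member uplink, so the upstream switches never see a MAC flap between
    # ports and no LACP bundle is required.
    import zlib

    UPLINKS = ["pnic0", "pnic1"]  # hypothetical physical NICs in the team

    def affinitize(vnic_mac: str, uplinks=UPLINKS) -> str:
        """Deterministically map a vNIC MAC to one member uplink."""
        return uplinks[zlib.crc32(vnic_mac.lower().encode()) % len(uplinks)]

    if __name__ == "__main__":
        for mac in ("00:15:5d:01:02:03", "00:15:5d:01:02:04", "00:15:5d:01:02:05"):
            print(mac, "->", affinitize(mac))

    On an uplink failure, the affected MACs simply start being sourced from a surviving port and the switch relearns them, which is the vendors' argument for skipping LACP; whether that beats a proper LACP bundle is exactly the question being asked.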

    One vendor our server team brought in actually made the comment that "LACP is old technology. You don't want to use LACP in a data center, that's outdated." It blew my mind, because I've honestly never heard that before.

    So... is this the new norm? Am I the one who is behind in the times? Or is LACP still the way to go?

    submitted by /u/NetworkDoggie
    [link] [comments]

    Nexus 9k switch LST alternative

    Posted: 02 Feb 2020 10:36 AM PST

    We have deployed Nexus 9k switches and VxRail behind those switches in our data centers.

    Each VxRail is dual homed to two switches.

    When the uplink of one switch fails, traffic is black-holed on that switch because the vSwitch in the VxRail doesn't know about the failure.

    From what I can find, there is no Link State Tracking feature on the Nexus 9k (it's only available up to the 5k).

    Are there any options other than running an EEM script on the Nexus?

    P.S. This is a pit-stop topology until we migrate everything to ACI, which is waiting on our passive infrastructure procurement.
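    For what it's worth, the Nexus 9k ships an on-box Python interpreter with a cli module, so a rough "poor man's link-state tracking" can be scripted and kicked off from the scheduler (or, yes, from EEM). A sketch only, with hypothetical interface names, not a tested config:

    # Rough on-box NX-OS sketch: if all uplinks are down, shut the VxRail-facing
    # ports so the vSwitch/NIC teaming fails over to the peer switch instead of
    # black-holing traffic. Interface names are hypothetical.
    from cli import cli

    UPLINKS = ["Ethernet1/49", "Ethernet1/50"]    # uplinks toward the core
    DOWNLINKS = ["Ethernet1/1", "Ethernet1/2"]    # ports facing the VxRail nodes

    def is_up(intf):
        return "up" in cli("show interface %s brief" % intf).lower().split()

    def set_downlinks(shutdown):
        for intf in DOWNLINKS:
            cmd = "shutdown" if shutdown else "no shutdown"
            cli("configure terminal ; interface %s ; %s" % (intf, cmd))

    # Shut the downlinks only when every uplink is down; bring them back otherwise.
    set_downlinks(shutdown=not any(is_up(u) for u in UPLINKS))

    It is still a script rather than a real LST feature, so it carries the same caveats as EEM; depending on the design, moving the failure detection to the host side (beacon probing or similar NIC-teaming health checks) may be cleaner.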

    submitted by /u/vivekamath
    [link] [comments]

    I've been planning some upgrades and I'm looking for thoughts and opinions

    Posted: 02 Feb 2020 12:09 AM PST

    Preface: I was "gifted" a network over a year ago. The SM fiber between the sites has always been there because I managed a department at that site, as it required CJI compliance. I adopted the network from a team of two, one of whom was arrested for distributing and taking meth; the other committed suicide about 4 months later. It has been a nightmare, but I'm making a lot of progress with it. Because of security risks/reasons, these are still two totally different networks with just a few firewall rules between the Cisco 5525-X and Cisco 2110 so I can manage them. At this time it's still two separate forests, two internet paths, two independent firewalls, two core/distribution layers, etc. I had to move all the servers out of Site D because it didn't meet my standards for a server room: there was only a UPS with a 20-amp A and B side, and a split forced-air unit that wasn't on backup power. In the first 6 months I was managing that network, the site lost power 4 times, and 2 of those events lasted over 2 hours. The UPSes couldn't even support the servers, so 3 of them immediately lost power at every event. So all the servers are at Site C, and I trunked the fiber connection to maintain my secure network and allow the servers to live in a real server room.

    Figure 1 (Current): The diagram is kind of wrong; there is a tunnel between the sites for compliant data. Data that doesn't need to be encrypted isn't. There are two different networks.

    Figure 2 (Goal): Site B is under construction and should be completed in spring. It's a remote site, so only microwave will reach it, but it's close to a CenturyLink fiber route, so I can demarc fiber there. It will also be a real server room with 100-amp A and B service, a backup generator, redundant air conditioning, etc. Then I'll tunnel it back to the Switch datacenter, where I have access to multiple ISPs for redundancy. I rent 2 RUs from my lower-tier ISP, who has a direct L2 connection to many ISPs. With the current AT&T tunnel, that means I'd have two fiber paths that are 100% separate until they converge at Switch, at two different sites, with a ring to bring them back together, so I'd be pretty prepared for any kind of physical failure.

    My biggest consideration is FIPS 140-2 compliance.

    MACsec seems like it would be the easiest way to handle this. My understanding is that if I get a switch that is MACsec-capable, like the C9300, there is very little to no performance hit while still providing encryption from switch to switch. But I'm unclear on the distinction between FIPS-compliant modules and services. From what I've read, MACsec is non-compliant.

    For the Cisco 9300, here is a report from Acumen Security stating:

    Acumen Security confirmed that the following features leverage the embedded cryptographic module to provide cryptographic services for SSH, TLS, IKEv2, IPsec and SNMPv3.

    • Session establishment supporting each service,

    • All underlying cryptographic algorithms supporting each service's key derivation functions,

    • Hashing for each service,

    • Symmetric encryption for each service.

    From my understanding, MACsec is not any of those services. However, MACsec still uses the same module those services use. When I read directly from NIST, it doesn't talk about the services tested by Acumen. It does, however, say that AES-GCM-128/256 is a FIPS-approved algorithm, which is what MACsec would use. My opinion, based on the information on NIST's website, is that I should be good.
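    For illustration only: MACsec (802.1AE) protects frames with AES-GCM, the same algorithm family NIST lists as approved, and the sketch below shows AES-256-GCM via Python's cryptography package. It is not the switch's implementation, and it says nothing about whether a particular platform's crypto module is FIPS-validated for MACsec; it just shows the algorithm in question.

    # AES-256-GCM demo with the 'cryptography' package. MACsec uses GCM-AES,
    # where the frame header is authenticated but not encrypted and the nonce
    # is derived from the SCI and packet number; here those are only stand-ins.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # stand-in for a MACsec SAK
    aesgcm = AESGCM(key)

    frame = b"example Ethernet payload"
    header = b"dst-mac|src-mac|SecTAG"          # authenticated-only data
    nonce = os.urandom(12)                      # stand-in for SCI + packet number

    ciphertext = aesgcm.encrypt(nonce, frame, header)
    assert aesgcm.decrypt(nonce, ciphertext, header) == frame

    Whether that counts as FIPS 140-2 compliant in practice comes down to whether the platform's validated module covers the MACsec use, which is ultimately what the validation paperwork (or the auditor) has to confirm.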

    I really don't want to reach out to my security officer or auditors to ask questions unless I know 100% that they'll say, "No problem, send me an updated map to approve and we'll be good." When I compare the cost of the 9300s to their performance, I don't see myself hitting their performance limit anytime soon, which is why I don't find it necessary to go to something like an ASR or Nexus. I figure, why not just get the distribution ports I need if the C9300 gives me all the routing protocols and performance I need?

    Right now, C3650s are the core routing and distribution layer, with static routes to the 5525-X HA pair or the 2110 HA pair depending on the network, and a 3900 or a 1941 on the edge depending on the network.

    Past the firewalls, I don't need encryption through the tunnels. From there, any CJI data will be encrypted with a site-to-site VPN on the firewalls.

    As for the equipment at Site A, a C9300 is overkill. I think I can just take some of the C3650s I replace, stack two together so one tunnel drops off on switch 1 and the other tunnel drops off on switch 2, and LACP back to the provider from each switch, so I can say there is redundancy at every conceivable level. It's just routing the internet traffic, which is going to be 500/500 at most, plus the TSoIP traffic at ~1750 Mbps at full capacity, which is nothing.

    Next: I've never used firewalls in an active/active configuration. Can I put one of the 2110s at Site C and the other at Site B and just route traffic to the pair? I know that wouldn't work in active/standby, but I just don't know enough about active/active. Or do I just keep the 5525-X pair at C and the 2110 pair at B and run OSPF from the C9300s to the two different firewall pairs?

    Does any of this seem reasonable or am I just an idiot?

    Edit: the 2100 series doesn't support clustering. I could have sworn I saw active/active on the license summary, so I'll have to think about that some more. My environment doesn't need anything larger than 2110-level performance, but redundancy is important. I just don't like the idea of one tunnel being dead in the water all the time because a firewall is on standby.

    submitted by /u/Dadarian
    [link] [comments]

    Requirements for transit routers.

    Posted: 02 Feb 2020 01:48 PM PST

    I was looking at white-label switches (specifically the Dell S series) recently, and I was told by a colleague that they wouldn't be suitable due to control-plane protection/policing hardware limitations.

    I think he said something like QoS is applied before the ACLs (can anyone speak to that, or elaborate on that topic? I'd love to check the various chipsets against this claim: Trident 3, Tomahawk 3, etc.).

    It got me thinking. What requirements differentiate a transit router from a normal internal router?

    A certain amount of buffering, control-plane protection, obviously a decent-sized TCAM, line-rate PPS?

    I'm looking to build up the requirements and then source hardware so I can do a write up.

    Disclaimer: I run a small transit network that takes a default from our ISPs but exchanges full routes with our local IXP, so I don't need full tables.

    submitted by /u/200tribbles
    [link] [comments]

    Fortigate VIPs don't show up as options when making a policy

    Posted: 02 Feb 2020 04:05 PM PST

    Hi All,

    I am teaching myself some networking in my home labs (hopefully this doesn't violate rule #1). I have two separate networks, one with a 60D and one with an 80C. On the 60D I am running 5.2; on the 80C, 5.6.

    On the 60D, I was able to set up a policy to allow all external traffic to reach the VIP I made, which port-forwards port 80 to an IP in the DMZ.

    I went to do the same thing on the 80C, but when I get to selecting the destination, I can't for the life of me get the new VIP I made to show up, either in the web GUI or from the CLI. I have done some Google-fu and found two posts with identical symptoms, but in both cases they seem to be the result of upgrading from a previous version of FortiOS while trying to keep the prior config.

    https://community.spiceworks.com/topic/1973368-fortigate-virtual-ips-not-selectable

    https://forum.fortinet.com/tm.aspx?m=152197

    Mine is a new config from scratch. In another post (https://forum.fortinet.com/tm.aspx?m=152731) they mention disabling central NAT. I tried doing this, but still no luck.

    Things I have tried doing so far:

    • Remaking the VIP (numerous times).

    • Ensuring the interface the VIP is bound to matches what I am trying to declare in the policy (the DMZ interface).

    Any other suggestions for things I can try?

    submitted by /u/cbtboss
    [link] [comments]

    What do you do for DHCP in large enterprises?

    Posted: 01 Feb 2020 07:11 PM PST

    Currently our environment is around 80k users and 124 locations, and we use Windows DHCP Server. Curious what other large enterprises use? I worked at a smaller firm before where we used Infoblox with great success.

    submitted by /u/TrumpsDump2020
    [link] [comments]

    Fibre Channel VSS switches?

    Posted: 02 Feb 2020 08:01 AM PST

    Hello, I have a noob question: can dedicated Fibre Channel switches work as a VSS, or is there no such thing?

    submitted by /u/xMNDarknessx
    [link] [comments]

    Pre-defined virtual network

    Posted: 02 Feb 2020 10:24 AM PST

    Hello, everybody,

    I am currently looking for a test virtual network for GNS3 that is already up and running, and preferably has different hosts and services. I already looked at the labs on GNS3, but they are all too small. The best would be a network with 200 clients or more.

    Does anyone have a suggestion?

    Greetings

    submitted by /u/konff
    [link] [comments]

    Cable (using Static IP) internet with LTE backup (using DHCP)

    Posted: 01 Feb 2020 05:04 PM PST

    Looking for a device that can connect to both a cable connection (that has a static IP assigned) and an LTE 4G cellular modem. The device should be able to automatically fail the clients over from cable to the LTE cellular modem. Most of the stuff I find on the internet can do this, but it doesn't handle a static IP assignment on the cable connection.

    If the cable connection with the static IP drops and the system switches over to the cellular connection, it's OK if the IP is dynamic on the cellular side. I am just trying to ensure the devices within the network maintain an internet connection.

    One device that seems to do this is the Cradlepoint ARC CBA850, but I can't tell if it can handle the static IP on the cable side.

    There was also mention of using a MikroTik solution, but I didn't see any MikroTik routers with built-in LTE modems.

    I have a server handling DHCP and DNS internally, so those features are not needed.
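    For reference, the failover behavior being described is simple enough to sketch in software. A rough illustration assuming a Linux box doing the routing, iproute2 available, and hypothetical gateways and interfaces; it is not a substitute for a dual-WAN appliance like the CBA850:

    # Rough dual-WAN failover sketch (Linux + iproute2 assumed; gateways and
    # interfaces are hypothetical). Probes out the cable link and swaps the
    # default route to the LTE modem when the static-IP side stops responding.
    import subprocess
    import time

    CABLE_GW, CABLE_IF = "203.0.113.1", "eth0"   # static-IP cable side
    LTE_GW, LTE_IF = "192.168.8.1", "eth1"       # DHCP-assigned LTE side
    PROBE = "8.8.8.8"

    def cable_alive():
        # Force the probe out the cable interface so the check still tests the
        # cable path even after we have failed over to LTE.
        return subprocess.call(
            ["ping", "-c", "3", "-W", "2", "-I", CABLE_IF, PROBE],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

    def set_default(gw, dev):
        subprocess.check_call(["ip", "route", "replace", "default", "via", gw, "dev", dev])

    while True:
        if cable_alive():
            set_default(CABLE_GW, CABLE_IF)
        else:
            set_default(LTE_GW, LTE_IF)
        time.sleep(30)

    This is essentially the logic the dual-WAN appliances implement, typically with the static cable IP configured on the primary WAN port and the LTE side left on DHCP.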

    submitted by /u/BitOfDifference
    [link] [comments]
