

    Tuesday, January 12, 2021



    No, POE doesn't work like that

    Posted: 11 Jan 2021 04:32 PM PST

    First post from a long time lurker...

    So I work in Tech Support for a variety of WiFi and network systems (hotels, dorms, apartment buildings, store chains), and that brings a lot of stories. Nearly all get a laugh, but living through most killed part of my soul and what's left of my faith in humanity.

    =/\= Obligatory TLDR Warning =/\=

    This story takes place the day after Hurricane Michael swept through Panama City, Florida. We weren't expecting a call from any store or hotel in the area; as far as we knew, power was out all over. So it was a little shocking to have an employee from a higher-end men's clothing store call in about network issues, saying he needed the store up and running.

    Here's what followed (me is me, E is employee, RG is Regional Manager):

    E: "Hey. Yeah, look we have clients coming in today for pick-up. I need the network back up."

    Me: "Clients? Sir, are you sure? Didn't a hurricane just come through?"

    E" "So? I have a business to run, and these are high paying customers. Help me get the network up."

    Me: "I'm happy to try, if I can. Does your store have power?"

    E: *scoffs* "No, but the network doesn't need power to work."

    Me: *eyes wide, mouth agape, looking at my phone with a 'WTF' expression* "Um...excuse me? Sir, could you repeat that?"

    E: "I said the network doesn't get power, not from the power company anyway. You should know this, being Tech Support!"

    Me: "Sir, I can guarantee, beyond the shadow of a doubt, that the network absolutely gets power from the power company!"

    E: "Do you think I'm stupid?" *I thankfully held my tongue* "The network is powered by the internet, like home phones are powered by the phone company. You know, POE? Power Over Ethernet? So help me get the network back up!"

    Me: *SMH while my soul dies a bit more* "Sir, any and all phone companies get power from the power company. The internet is powered by the power company. POE is for devices designed to get power from a main source that's plugged into the wall. Getting power. From the power company."

    E: "You just don't want to help me! What am I supposed to tell the customers who are coming in for their clothes?!"

    Me: "Do you honestly expect people to come out after a major hurricane for clothes?"

    E: "It was just a storm, they'll be here. Now help me get the network up. I have a flashlight and in the back. What cords to I unplug?"

    Me: "Sir, respectfully, but I can't help you. There's not power to the store, and unplugging anything will be pointless until power is restored."

    E: "YOU'RE REFUSING TO HELP ME?!"

    Me: "Yes." *Gods, it still feels good I could say that* "Not because I don't want to help you, but because I can't."

    E: "Get me your Supervisor. NOW!!"

    Me: "Yes sir, of course."

    I place Genius on hold and turn to my boss (her desk is next to my station, and she heard the whole thing). She never once looked up and just said, "Nope. Hell no. I won't waste my time. Tell him to call back when there's power."

    Needless to say, that part of the conversation went smashingly. He hung up in a thrash of anger and swear words, and I noted my Ticket accordingly. Bosslady just shook her head as I snickered about the level of smarts the employee lacked.

    Flash forward 20 minutes, and I was the lucky winner of the call from E's Regional Manager. I answered with my standard greeting and could hear the sneer of the RG on the other end.

    RG: "Oh. Good. Now I don't have to ask for you. Would you mind explaining to me why am I getting a call from E explaining that you refuse to help him?"

    Me: "Did E tell you why I refused?"

    RG: "Yes. He said you refused to help him get his network online after he called in informing you it was down."

    Me: "Did he also inform you that the store is without power, and according to the latest reports, power could be down in Panama City for the next 48 hours?"

    RG: "Excuse me? Wait, what?"

    I advised RG of the conversation, and of my Supervisor refusing to speak with E due to the lack of power at the store. I swear, you could hear RG's shoulders slump over the phone.

    RG: "He actually thinks that's how POE works? You've got to be kidding me!"

    Me: "Sir, I wish I was. As I told you, I tried to explain it to E, but they refused to listen. He demanded I get the network up due to people coming in to pick up their clothes."

    RG: "He literally expects people to come in? After a hurricane?!"

    Me: "Yes sir, but he said it was 'just a storm'."

    RG: "Well, I'm over in Biloxi right now, so I'm aware of the full extent of anything there. Look, let me get off here. I'm gonna call E and have a talk with him."

    The call ended and I was kinda sad I couldn't listen in on that conversation. A couple of days later, on my day off, E called in and got his network up after the power was restored. My coworker who took the call messaged me about it and said E sounded like a kid who'd been grounded for 2 months, utterly defeated. One of the few times I wish I could have been in the office to take a call.

    submitted by /u/ELS314STL

    The satisfaction of supporting unsupportable systems

    Posted: 11 Jan 2021 02:17 PM PST

    Hey y'all!

    Let me just start off by saying I love my job. I have a lot of freedom, I work from home, primarily on a manual ordering process for internal IT, but also on some tech support for our automated ordering portals.

    My company has had a problem with sunk cost fallacy for years. Instead of throwing out the trash and rebuilding something that works, they've been tacking on system upon system to patch holes in our requirements. That's how we've ended up with four ways of ordering internal IT stuff: A manual process involving emails (to me nowadays), an order portal, a newer order portal, and emailing servicedesk. Understandably, the users are incredibly confused, and rarely know where to order what.

    Luckily, at the start of summer last year, my boss got the approval to implement a new system from scratch which will replace all these older systems and solutions. It will also end the hard system separation between our ordering tickets and our incident tickets, allowing all vendors and servicedesk to work in the same system, yay! Implementation is planned to be finished by Q2 this year, so by the end of the year we might be half done.

    Enough backstory. As a result of reorganization after reorganization, as well as normal career moves, every single person who worked with the backend of the oldest order portal is now gone. Additionally, there is no documentation on our specific implementation. All we have is a manual for the system itself, which while helpful is also an enormous infodump that doesn't tell us anything about our custom workflows.

    Because of all this, we were in a situation for a while where literally no one supported the tool. If there was an issue, shrugs were had. The team my boss heads up is responsible for it along with all the other tools like it, but they were focusing on implementing the new and replacing the old, not on keeping the old working while implementing the new. After almost burning out due to workload in the fall, I've ended up with a revised task list: My manual ordering department, and best-effort support for the old order portals.

    For the newer (but not newest) portal, all I have to do is answer easy questions and redirect harder ones to our external consultant. For the oldest one... I have to get creative.

    Little to no documentation. Antiquated UI. Convoluted workflows. Basically no handover. But hey, it's best-effort support, right? If I can't figure it out, and it's urgent, we'll push to implement that feature/form in our new portals, and handle it manually while waiting for that to be done.

    Today was my day. A form stopped working. Big panic. Chaos all around. The entire country was a warzone. I'm definitely not exaggerating, and the reality definitely wasn't that one person pinged me on skype asking if I could look into it for them. Normally, I'd ask for a ticket, but since I'm only doing best-effort support I figured I might as well take a quick look and see if it's something I can fix quick and easy before asking for that. I like the guy that messaged me too, so he got a free pass there.

    Clicking around, I found the workflow. I found the XML response from the system it failed to create a ticket in. I found the error, it was trying to assign to an outdated group that was removed over Christmas. I found where that group variable is defined. I changed it. Saved. Test again?

    Failure. Why? Oh look, there's a deploy button I missed. What's the worst that could happen when you press an unknown button named Deploy in a system you haven't been trained on, that no one understands, after changing a variable that hopefully does what you want it to do?

    Fortunately, nothing exploded. The user resubmitted again, and it worked. Problem solved. I'm a literal god.
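    For anyone curious what that fix amounted to in the abstract, here's a rough Python sketch of the sanity check I was effectively doing by hand: pull the assignment groups referenced by a workflow's config and compare them against the groups that still exist. The XML layout, group names, and the find_stale_groups helper are all invented for illustration; the real portal's internals were nowhere near this tidy.

        # Hypothetical sketch only: the element names and groups are made up.
        import xml.etree.ElementTree as ET

        def find_stale_groups(workflow_xml, valid_groups):
            """Return assignment groups referenced in the workflow that no longer exist."""
            root = ET.fromstring(workflow_xml)
            referenced = {el.text.strip() for el in root.iter("assignmentGroup") if el.text}
            return sorted(referenced - valid_groups)

        sample = """
        <workflow name="order-form">
            <step><assignmentGroup>IT-Ordering-Old</assignmentGroup></step>
            <step><assignmentGroup>IT-Ordering</assignmentGroup></step>
        </workflow>
        """
        current_groups = {"IT-Ordering", "Servicedesk"}  # groups that survived the Christmas cleanup
        stale = find_stale_groups(sample, current_groups)
        if stale:
            print("Workflow references removed groups:", ", ".join(stale))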

    User informs me that if we end up in a bar together, he'll buy me a beer. I don't drink, and rarely leave my apartment, but it's a nice sentiment.

    TL;DR: When you don't know what you're doing, poke at things until it works. Hope it doesn't explode.

    And yes, I know that I did kind of know what I'm doing, enough to understand what various terms in the UI of the tool meant and where to click to probably get it to do what I want, but still!

    submitted by /u/Lilyliciously

    Yeah, maybe that's why you shouldn't close that door?

    Posted: 11 Jan 2021 03:02 PM PST

    Hello there, a little story about something that happened a couple of months ago that I thought you guys might enjoy.

    For background, I work as a tech for a small IT company; our jobs range from Outlook oopsies to building whole IT infrastructures. We have clients in many sectors and of various sizes.

    The story is about an incident that happened at a large(ish) plant (not going to say what kind, but if their servers go down, human waste will literally and figuratively hit the fan). They're also located about 100 km away from our offices.

    No cast.

    The techy bits (important later): This factory's infrastructure is divided between two server rooms; each room has 3 servers running virtual machines, a bunch of switches, and other equipment. One of the server cabinets is located in the main power room for the factory and also happens to contain the sole SAN (I know). This place gets HOT with all the electricity running through it.

    So, up until sometime this summer the server cabinet had a functioning AC unit sitting on top of it. At some point it started leaking oil into the cabinet and ended up killing one of the three servers. The decision was made to turn it off. We opened up the cabinet and called the AC company in order to get some less shitty AC in there (we are still waiting). We also explicitly told people to keep the cabinet open and at least ventilated. Did I mention the location being somewhat on the warm side?

    One day I was performing my daily tasks when I got a call from the client asking me to add a user to some groups in AD because he needed access to some files, yada yada, no problem. While I was looking through the directory trying to locate said user, my screen (the remote desktop) suddenly froze. Okay, it happens, just reconnect and you're good to go. Biiiiiig nope.

    The VPN was still working, so they had internet; I could see their wireless access points online, and people were connected to the WiFi. The physical servers were responding and the switches seemed to be online too, but there was no response from the virtual infrastructure. Not good.

    One thing that did not respond to ping was the SAN, a mighty Dell Compellent SCv3020 with hybrid storage, an expensive AND complicated piece of equipment. For context, I did not install this thing, I am not trained to operate it beyond monitoring, and I basically do not know how that shit works. The guy who did all that left the company last year, and since I'm the closest thing to a senior tech that remained, I inherited the whole mess.
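    In practice, that triage boiled down to pinging each layer of the stack and noting where the answers stopped. Something like this rough sketch, where the hostnames and addresses are made up and a Linux-style ping is assumed:

        # Quick-and-dirty reachability check; hosts and IPs are invented for illustration.
        import subprocess

        HOSTS = {
            "core-switch":   "10.0.0.2",
            "esx-host-1":    "10.0.1.11",
            "esx-host-2":    "10.0.1.12",
            "san-scv3020":   "10.0.1.50",
            "vm-fileserver": "10.0.2.20",
        }

        def is_up(ip):
            """Send one ICMP echo with a 2 second timeout; True if the host answered."""
            result = subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return result.returncode == 0

        for name, ip in HOSTS.items():
            print(f"{name:15} {'UP' if is_up(ip) else 'NO RESPONSE'}")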

    After I'd answered at least a dozen panicked calls from the client, I called my boss and told him some fuckery was afoot. He agreed that we (and by that he meant me) had to go onsite and unravel that mystery. It was 3 o'clock in the afternoon and it takes at least an hour to get there when conditions are right, meaning overtime. I packed my stuff and got in my car.

    When I got there the clock showed 16:30, friggin' traffic jams. I was welcomed by the main onsite tech, who was technically on vacation and had been urgently recalled to supervise operations while the computers were down. Off to the warm place we went.

    My jaw nearly dropped when we arrived at the cabinet: the doors were shut, the fans inside were louder than the rest of the crap running in the room, and I'd never seen that many red lights on a pile of expensive hardware. What the fuck?

    When I opened the front door a wave of hot air hit my face, you know, like when you open the oven to check on a cake? After a couple of minutes the fans calmed down while I was trying to see if the SAN wasn't actually on fire. It wasn't, but it wasn't exactly working either. "Fuck, I'm going to spend the night restoring backups on the old SAN, aren't I?" I thought.

    I pulled the service tag tab out and rang Dell support; no way I'm touching this, and we're paying for ProSupport anyway. Over a couple of hours the guys from Dell helped me get everything up and running again, even checked with me that the ESX servers had reconnected to the SAN properly, and I went home around 22:00 after making certain every VM was performing as it should.

    Turns out the plant had had an inspection, and some idiot decided that every door that should normally be closed HAD to be closed, non-ventilated server cabinets in warm places included. They even decided to disable the temperature sensor because it was beeping loudly. I just wanted to kill that guy and paint a message on the door with his blood. I warned them not to fuck with the cabinet again, because if that happened I would not run over there on my off hours to clean up their mess.

    Since the AC started leaking, I make a point of visually inspecting the cabinets at least once a week. Guess what happened two weeks later? That's right, they closed the door again... I think I'll remove the door until (if?) we get that new AC.

    submitted by /u/Glasofruix

    Left hand, right hand, underhand, back hand... Who's doing what?

    Posted: 11 Jan 2021 04:18 PM PST

    u/Lilyliciously's post reminded me of something that happened about 10 years ago.

    At this point, I was a systems analyst, second line support monkey, and Lotus Notes designer. Yes, really. I'd been sent on the official Notes Developer Bootcamp and everything (I might still have the training manual somewhere). For the most part, this aspect of my work involved managing updates to the front page of the "Intranet" (the default database that appeared when you opened the Notes client), building things at the whims of users, and maintaining the existing stable of Notes applications.

    While I was a designer, I wasn't an administrator. No, that was handled by my colleagues in Infrastructure. There was a constant low-grade battle over whether I was allowed to make changes to the public names & addresses book because of this divide. They said that I wasn't to be allowed editor access to it, because I could break something. I said that I wasn't a bloody idiot, and that in order to create access groups for the applications that I was building (as part of my job), I needed to be able to edit records in the NAB. Eventually, we compromised - I had to use a different account to gain access to the NAB. Fine - I needed that one to push applications live anyway, so NBD.

    One of the more interesting applications that I got to know was the public schedule summary. This did a weekly run-through of the current IT department, and prepared a day-by-day listing of where everyone would be, according to their calendars. We were encouraged to set up all-day events that would say which site we'd be working at, and these would ensure that we always appeared in the summary, in case we didn't have any meetings or similar.

    In late 2009, we got a new IT director. The old one (who was old, as well as long-serving) had been directed to implement Ellison's cash cow (an almost completely terrible fit for the company) to replace our existing manufacturing ERP, as well as take over purchase management and the General Ledger from our sales order processing ERP. He didn't particularly want to retire, but more than this he didn't want to be held responsible for the massive decline in user experience that would be forthcoming. So he resigned. The new IT director, fresh from the UK office of a leading purveyor of brown fizzy drinks, decided to make changes. He fired the old director's direct reports (good idea in the case of one; terrible idea in the case of the other), and brought in some cronies. As they do.

    One morning, I was doing my usual stuff, when I got a panicked call from a usually calm colleague. The schedule summariser had nothing in it. Odd. I checked the logs, and saw that the weekly job had run as planned - a bit faster than normal, but no errors - Wait. How fast? Normally, summarising the week's movements from 40-50 calendar applications took a few minutes. This started and stopped in a couple of seconds. Not Good.

    I delved further into the rabbit hole. The job ran a simple subroutine - pick up the IT group from the NAB, step through the entries in its members field, and process the calendar for each one. Fine. I opened the NAB and looked at the IT Group. Well, I tried to. Then I switched accounts (*grumble grumble*) and tried again. OH.

    Remember I said that the new director wanted to make changes? One of the changes that he'd implemented was to ensure that, as much as possible, all flat group listings in the NAB were replaced with hierarchical ones. He didn't want changes to have to be made to multiple documents to ensure that they were correct - he wanted individual names in as few documents as possible. I have no idea why this was so important to him - he certainly never did the work! However, as was his will, so was performed. By Infrastructure. They didn't think to ask Applications Support if any of the applications that they supported would be affected! (In fairness, as I hadn't built the thing, I wouldn't have known that it would be affected. Would have been nice to be asked, though.) I pointed out that this change had b0rked a key department resource - could we please have a single flat group listing so that it would work again?

    No. No, we could not. I was told to make it work.

    Now, I like a challenge, but this was a lot for a Monday morning. It was soon after this that I started drinking lots of coffee.

    My problem was that there were multiple levels of the hierarchy, and while some levels contained either people's names OR other group documents, there were a few that contained both. Bugger.

    My solution was to write a function that would recursively call itself for each passed group name, until it got to a person's name. Then it compiled an array of found names and passed this back up the stack until it got to the top, and then proceeded as normal.
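    The real thing was LotusScript, but the logic was roughly this. Here's a Python sketch with a toy dictionary standing in for the NAB group documents; the group and person names are made up:

        # Sketch of the recursive flattening described above. NAB_GROUPS fakes the
        # names & addresses book: group names map to member entries, which are either
        # other groups or people's names.
        NAB_GROUPS = {
            "IT":                ["IT-Infrastructure", "IT-Applications", "CN=Jane Doe/O=Corp"],
            "IT-Infrastructure": ["CN=Bob Smith/O=Corp", "CN=Ann Jones/O=Corp"],
            "IT-Applications":   ["IT-Notes-Team", "CN=Raj Patel/O=Corp"],
            "IT-Notes-Team":     ["CN=Kim Lee/O=Corp"],
        }

        def expand_group(name, seen=None):
            """Recursively expand a group into a flat list of people's names."""
            seen = set() if seen is None else seen
            if name in seen:                 # guard against circular group references
                return []
            seen.add(name)
            people = []
            for entry in NAB_GROUPS.get(name, []):
                if entry in NAB_GROUPS:      # the entry is itself a group document
                    people.extend(expand_group(entry, seen))
                else:                        # the entry is a person's name
                    people.append(entry)
            return people

        for person in expand_group("IT"):
            print(person)    # every nested member, flattened into one list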

    TL;DR: New director insists on change for change's sake, colleagues don't enquire about the possible effects of such changes, and I get to clean up the mess.

    submitted by /u/KelemvorSparkyfox
