The $2000 light bulb. | Tech Support
- The $2000 light bulb.
- Wireless Access Point is lost. I am lost. Everything is lost.
- My client is currently learning a lesson on proactive IT budgeting and management.
- Getting the 3rd degree from Ms. Two Masters Degrees
- "It's your fault!"
- Printer from hell
- User's credentials don't work, you must have messed something up
- Printers & You
- Incomplete Password Policies
- About password policies
- Of Login Errors and Deceptive DNS
- Call Center Tale #3: The Move, Part 3
The $2000 light bulb. Posted: 28 Jan 2020 02:11 PM PST

Hello folks, sorry for the long delay. I've been busy (lazy) and haven't written another story like I said I would. Long story short, I fix X-ray machines and other imaging systems. As the title says, this one is about the time I charged $2000 to change a light bulb. Now this one might make a few folks mad because of the price of such a simple job (and trust me, it was simple). Let's do this.

Just another day in paradise about 5-6 years back, trying to keep my head above water and not get annihilated by an angry tech. I get a call from a site with no service contract. No contract = not the highest of priorities. So I do my normal thing (I was a newbie back then and didn't have a built-up reputation with my customer base) and tell them I'm a bit slammed and will be there in two days, as that's the earliest I can get to their site. It was a bit remote and we were low on manpower.

That did not go well. I was immediately told that they were an important customer (they weren't) and that if this was not resolved ASAP they would not do any business with my company again (they haven't in the last few years anyway). Even though I was a newbie, I had already learned how this went, and I will flat out admit I am a little spiteful at times. So I deflect and tell them I will see if I can get there same day (overtime will happen) and take care of them.

I already had the bulb and it was a simple 5-minute swap. The issue was the 2-hour drive to get there, and I was already in the middle of a job with a significantly larger customer that was already fairly demanding. So I did the logical thing of calling my boss for advice. I was told to handle it how I thought best. Thanks, boss. So I will do as I was told.

As I said earlier, I am a bit spiteful at times. I also have a good bit of leeway on what customers get charged for out-of-contract or no-contract service. This is sometimes reflected in how said customer treats me. I like to take care of my customers and build up the reputation that I do care (which I do). But at the same time my company wants me to bill for the exact time I work on things. So when a customer is not being the kindest of people, my spitefulness comes a-knocking.

I finished my job at the big customer around 6 PM. We were already into overtime hours, so this just got pricey. I call the no-contract customer and ask if they want me there tonight to fix the system. Let me just say that while the system was not 100% usable, they were still able to use it for patients. X-ray techs will know what bulb I'm talking about. I'm more or less told to get my ass there and be useful. Yeah, I'm really liking this person, and I was warned about them prior to this encounter.

So I haul ass in the man van to their site and swap out the bulb. Gave them the normal spiel of getting their manager's email and saying I'll send the bill the next day, blah blah, minimum time plus travel. They didn't like the bill. As the title says, it was $2000ish. I got yelled at and they demanded my manager, which I happily obliged. They paid the bill in the end, and that's how I charged 2 grand for a light bulb. [link] [comments]
Wireless Access Point is lost. I am lost. Everything is lost. Posted: 28 Jan 2020 12:43 PM PST

Short but sweet.

Network Operations: We have a downed Wireless Access Point, hostname USCA#########.us.hellhole.com; we tried to check the switch port but the MAC address doesn't appear to match. Switch is hostname USNY#########.us.hellhole.com. Could you visually check the connection?

Me: Could I visually check the connection between a switch in New York and a WAP in California?

Network Operations: Yes.

Me: Where would you like me to start? New York or California? I charge by the hour, by the way.

Network Operations: Start at the WAP.

Me: Ooookay. Quick question, though: I pinged the WAP and it's not down?

Network Operations: We meant this other WAP, hostname USCA#########+1.us.hellhole.com.

Me: Are you sure?

Network Operations: Yes. As per the previous notes, please check the connection of USCA#########.us.hellhole.com.

Me: Closing this ticket because USCA#########.us.hellhole.com is fine. Opening a new ticket for USCA#########+1.us.hellhole.com as this is getting confusing.

Network Operations: Sending old ticket back to you. Issue is not resolved.

Me: Oh, there are many unresolved issues here... [link] [comments]
My client is currently learning a lesson on proactive IT budgeting and management. Posted: 28 Jan 2020 08:19 AM PST

I work for an MSP, assigned to a medical practice full time since early 2019. From the beginning it's been a technical battle to revamp their infrastructure to a point that resembles industry-standard practices. One of the biggest logistical battles between the client decision makers and our MSP has been finding a reason for upgrading older hardware and spending money. Most of the practice runs off 4th Gen or earlier Intel i-Series CPUs in Dell Optiplex towers; that 'most' refers to over 50 of their nearly 200 total devices. We have pushed upgrades as issues have occurred, but the client only ever wants to upgrade the boot drive and OS because "if it boots it works," or something like that. I like to choose my battles wisely, so I abstained and worked around their concerns/priorities, mostly focusing on updating the VM setup and deprecating older Windows Server installs that aren't supported.

We recently had our IT budget meeting for 2020, where I laid out the critical expenses for the year ahead of time, something we are making a central part of our IT plan going forward at the practice. We discussed for over three hours the requirements of their system on a per-user basis, the capability of the current hardware, our suggestions on efficient upgrading, our reasoning based on anecdotal evidence and industry-standard practices, and even specific explanations of the slowdowns certain users get from the level of multitasking the practice asks them to do on these old machines. We presented our pricing (at a steep discount due to volume) and, with the acceptance of management, handed over the documents for approval. I've already received multiple communications via email over the past two weeks requesting additional justification for the expenses (apparently three hours of voicing concerns and taking notes wasn't enough).

Today I got notification of a doctor having a very urgent issue regarding his PC hardware. He hasn't tried to use the disk drive on his 7-year-old Pentium-series Optiplex in some time, and come to find out, it is broken when he tried today! He was quite concerned due to time constraints on his work when working with patients and other doctors/teams. The client stressed this urgency and concern to me; I must make this disk drive work for him! The pressure is on everyone at the practice! Ahh! (/s)

I responded professionally that we suggested his hardware be upgraded mid-2019 when his Windows 8 image failed, but the request was denied due to the PC being considered an acceptable age by the practice managers. I also noted that we re-submitted his PC for replacement a few weeks ago in the budget meeting, due to hardware age, which would only cause issues down the line. I noted that the practice still hadn't provided us a final verdict on the deployment, despite our assessed need for the new hardware. The doctors at our practice can be a little hard to deal with when time is of the essence, and I am hoping the management team is putting 2 and 2 together on how to avoid these situations going forward. Best of luck out there, hopefully you are listened to more than I am!

TL;DR -- Client refuses for years to have a planned deprecation of hardware and is carrying tens of out-of-warranty towers that have slowed to a crawl. Despite several attempts over my last year with the client to urge them to upgrade this part of the infrastructure bit by bit, they see the reasoning as insufficient. One of those PCs breaks in a time-sensitive way today, all hell breaks loose, and I sip my water at my desk knowing I predicted something just like this would happen, while they scream urgency through email. [link] [comments]
Getting the 3rd degree from Ms. Two Masters Degrees Posted: 28 Jan 2020 05:22 PM PST

Back in the late 1990s I was a field tech for a company owned by Big Blue, and a call comes in from a major (local) retailer. It's 5:00 PM during the Christmas season, so I call my boss, who was OK with OT. We had a 2-line device called a Motorola Portable Terminal on which we could look up parts and see any troubleshooting/errors the call center had taken from the customer. I look at the error and it is an issue with the token ring beacon; basically, the store's token ring network somehow has a break in it. (Google "token ring," then chuckle.)

I called the store and asked for the person who called us, to see if we could fix it over the phone; otherwise I wouldn't be there for two hours (I was near Boston). I proceed to get yelled at that she has two master's degrees and get here now.

I walk in 1.5 hours later to see users abandoning full carriages. I walk behind the registers for a quick check before using the diagnostic floppy and see a register with a scanner plugged into the number one port, the one meant for the network, and correct it. I hear a beep meaning the store is back up, and 16 registers are downloading data to the back room... one at a time.

I spoke to the store manager, who said Ms. Masters Degrees had been replacing a scanner but denied it when asked, and he is in the process of adding up how much $$$ it cost the store. [link] [comments]
"It's your fault!" Posted: 29 Jan 2020 03:24 AM PST

This little story came to an end just a couple of hours ago. I work for a very big company, doing L3-4 support for a very particular tool that has to do with data protection. This particular tool is a bit picky regarding Linux kernels, and you always need to check compatibility before updating a distro's kernel. Well, as happens 95% of the time, they didn't check before updating... This meant a high-priority incident because the data became inaccessible. A few hours of work updating the tool and reconfiguring got everything working again.

Fast forward to my next shift, and what do I see in the queue? The same incident, higher priority, and a particularly nasty email escalating to my boss's boss. Delightful... I get on the bridge and spend a couple of hours listening to how this tool is garbage, how everything we do is not enough, and how someone is going to be held responsible for all of this... all while trying to troubleshoot what the hell happened (meaning "what did they do") that made the tool break again.

So after asking about 15 times what they did after getting the tool fixed the night before, restarting for good measure, and listening many times to how my ass is on the line, I hear something that makes me very happy and angry at the same time:

Bridge: "We just stopped the services and rebooted the server to check for <tool B>..."

Me: "That shouldn't be a problem, the services for this tool start automatically."

Bridge: "Oh, no, we set it to manual..."

Me: "So you stopped the services, set them to manual, rebooted the server, and didn't start the services again?"

Bridge: <deafening silence for 45 seconds>

Bridge: "We started the services and everything is working now."

Me: "Great news! So, just to be clear, this almost-24-hour downtime had nothing to do with the tool, and it was all because of human error?"

Bridge: "Thank you for your assistance" <click>

I'm totally writing a beautifully worded email to my bosses in reply to their kind words. [link] [comments]
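For anyone wondering what "we set it to manual" actually costs you after a reboot, here is a rough sketch of the kind of post-reboot sanity check that would have caught this in seconds. It is only an illustration: the story never names the tool or its services, and it assumes a systemd-based Linux host, so the unit names below are purely hypothetical.

```python
import subprocess

# Hypothetical unit names -- the actual data-protection tool is never named in the story.
SERVICES = ["datatool-agent.service", "datatool-scheduler.service"]

def systemctl_query(verb: str, unit: str) -> str:
    """Run `systemctl <verb> <unit>` and return its one-word answer.

    `is-enabled` / `is-active` exit non-zero for disabled or inactive units,
    so we capture the output instead of letting that raise an error.
    """
    result = subprocess.run(["systemctl", verb, unit],
                            capture_output=True, text=True)
    return (result.stdout or result.stderr).strip()

for unit in SERVICES:
    enabled = systemctl_query("is-enabled", unit)  # "enabled" = will start on boot
    active = systemctl_query("is-active", unit)    # "active"  = running right now
    print(f"{unit}: enabled={enabled}, active={active}")
    if enabled != "enabled":
        print(f"  WARNING: {unit} will not come back by itself after a reboot")
    if active != "active":
        print(f"  WARNING: {unit} is not running")
```

Run after every reboot (or from a cron job), something like this turns a 24-hour bridge call into a one-line warning.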
Printer from hell Posted: 28 Jan 2020 09:26 PM PST

OK, cut to 2005 or so. Typical small business, a 400-seat network in old factory buildings where I'm briefly consulting. The hell desk guys have worked on this issue for a while and can't figure out why an HP 4250 laser printer is having so many problems. Random fonts get bold in the middle of a page on some documents but not others. Entire lines drop out occasionally, again at random. Other times, everything is fine. Sometimes the printer will turn on and test pages print fine, then an hour later the random weirdness starts again.

They ask me to take a look with them. We try new toner and a new fuser: zip, nada. We try swapping it for another identical 4250 printer: same exact thing happens. Try a new HP JetDirect card: same thing. Swap the electrical cable, surge protector, and network cabling: same result.

I decide to grab the printer after hours and bring it to our build lab. And of course, immediately the bloody thing is printing perfectly. Engage sleuth mode. Arrive at work the next day with no good answer, but call up our facilities person on a hunch that something environmental on that floor is causing it.

30 minutes later we discover an electrical sub-panel behind a wooden cover (illegal) at the back of a closet that feeds power to all circuits on that floor. The idiots who ran the wiring used undersized romex for the feed to that particular area. Slap an inductive amperage meter on the wire, turn on the copiers, PCs, etc., and as soon as the fuser warms up, the amperage exceeds what the wire can handle and you can feel the wiring getting hot.

Solution: run an extension cord 50 feet across the hallway to another wall plug until a new circuit gets installed. Problem solved.

TL;DR: Proper power can solve many an issue. [link] [comments]
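If you want a feel for how fast a shared branch circuit runs out of headroom once a laser fuser kicks in, here is some back-of-the-napkin math. Every wattage below is an assumption typical of office gear from that era, not a figure from the story, and the 15 A / 120 V circuit is likewise a guess at what that undersized romex was feeding.

```python
# Back-of-the-napkin load check for one shared 120 V branch circuit.
# All numbers are assumptions; the story gives no actual specs.

CIRCUIT_AMPS = 15      # a common rating for a lightly-wired branch circuit
VOLTS = 120

loads_watts = {
    "laser printer, fuser warming up (peak)": 1100,  # assumed; office lasers spike hard at warm-up
    "copier": 900,                                   # assumed
    "PCs, monitors, misc.": 600,                     # assumed
}

total_watts = sum(loads_watts.values())
total_amps = total_watts / VOLTS

print(f"Total draw: {total_watts} W ~= {total_amps:.1f} A on a {CIRCUIT_AMPS} A circuit")
if total_amps > 0.8 * CIRCUIT_AMPS:  # rule of thumb: keep continuous load well under the rating
    print("Overloaded -- expect voltage sag, warm wiring, and flaky printer behavior")
```

With those assumed loads you land around 22 A on a 15 A run, which is more than enough sag to make a fussy printer drop lines mid-page.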
User's credentials don't work, you must have messed something up Posted: 28 Jan 2020 05:51 AM PST

Sorry for the wall of text, TL;DR at the bottom.

$Me is /u/Twilightoutcast
$TL is team leader
$User is the point of contact on the ticket
$Newuser is a new user account that was just created

I work at an MSP that handles a lot of different companies; my team handles about 30-40 of them. Yesterday I got a ticket passed back to me by my $TL via IM. I pull up the ticket and, lo and behold, $User updated the ticket, roughly saying, "Can't log into $Software as $Newuser with provided credentials, tried every combination of characters. There must be a step missing on your side, again." I look at who worked on the ticket and it was me, so I know I didn't mess it up (I do occasionally get tickets from $TL that other teammates have messed up, so I always check who worked on it first).

I notice that $User put "tried every combination of characters," which is funny because we've had issues in the past with this specific contact not being able to differentiate lowercase L and uppercase I, as well as uppercase O and zero. I've specifically taken steps to avoid this by using password generators that give me very easy-to-read passwords; Fang744Skull is one I just generated as an example.

Anyways, I go on to double-check my work. The $Newuser accounts were set up for Windows sign-in, email, and some SQL database software they use (third party; all we do is handle account setup, and if there's an actual issue with the software or database, then it's not our problem unless the people who sign my checks tell me it is). I remote onto one of the servers and check the Windows credentials: login, no problems. Check email: again, no problems. Nothing unusual here, as when setting up the SQL accounts there's a lot of checkboxes to enable and it's pretty easy to miss a few. While remoted onto the server I open the software, type in the username, copy/paste the password, and boom, I'm in. This software has like 3 or 4 different modules to log into, so I check those as well. Boom, boom, no boom. I double-check the last one I'm trying to log into; this one is used primarily for the head honchos at this company, so I don't think it's a big deal.

This is about the time it dawns on me that $User doesn't know what copying and pasting is, or how to spell for that matter. Seeing as I have some free time (more than it took to write this post), I decide to be petty and spend more time than I'd care to admit finding a website to transcribe the password into phonetics (i.e. alpha bravo charlie 1 2 3) so I could email it to the user and they would have the exact spelling and capitalization of it (provided they knew how to read at a high school level, anyway). After finding one and getting the password transcribed with proper casing and everything, I think to myself that this might be a bit too petty to email to $User. So I look at my $TL and ask, "Hey, would it be petty if I sent $User the phonetics of the password so they knew how to spell it, or is that overkill?" He chuckles for a minute, then responds, "Yeah, that'd be kinda shitty but funny. Probably not a good idea."

After doing some talking with $TL, I decide to just tell the user to copy and paste the password into the password field, as that's what I've been doing this whole time. I already have my notes on the ticket written up, but I reformat them so they don't look like Elvish to $User when I send them to him in an email. My notes basically went something like this: "Was able to log into everything without issues. Please try copying and pasting the password into the password field. If you need further assistance please let us know. Thanks, $Me"

Set the ticket back to its closed status without sending a notification email to $User that it was closed (I hate sending them an email and then sending another automated one letting them know it's done when my first email already said that). I have the ticket pulled up now and there are no updates from $User, so it seems I've helped them discover the magic of Ctrl+C and Ctrl+V.

TL;DR: Point of contact couldn't log in as a new user because they either can't spell or don't know what copy/pasting is.

Edit: added some details and edited for clarity [link] [comments]
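Since OP mentions both easy-to-read password generators and a phonetic transcription site, here is a rough sketch of how you could roll both yourself. This is purely an illustration in Python, not whatever tools OP actually used; the excluded character set and the password length are arbitrary choices.

```python
import secrets
import string

# Characters that are easy to confuse over email or on paper (the story's l/I and O/0 problem).
AMBIGUOUS = set("Il1O0o")

NATO = {
    "a": "alpha", "b": "bravo", "c": "charlie", "d": "delta", "e": "echo",
    "f": "foxtrot", "g": "golf", "h": "hotel", "i": "india", "j": "juliett",
    "k": "kilo", "l": "lima", "m": "mike", "n": "november", "o": "oscar",
    "p": "papa", "q": "quebec", "r": "romeo", "s": "sierra", "t": "tango",
    "u": "uniform", "v": "victor", "w": "whiskey", "x": "x-ray", "y": "yankee",
    "z": "zulu",
}

def readable_password(length: int = 12) -> str:
    """Generate a random password with no easily confused characters."""
    alphabet = [c for c in string.ascii_letters + string.digits if c not in AMBIGUOUS]
    return "".join(secrets.choice(alphabet) for _ in range(length))

def spell_out(password: str) -> str:
    """Spell a password phonetically, preserving case information."""
    parts = []
    for ch in password:
        if ch.isalpha():
            word = NATO[ch.lower()]
            parts.append(word.upper() if ch.isupper() else word)
        elif ch.isdigit():
            parts.append(f"digit {ch}")
        else:
            parts.append(f"symbol {ch}")
    return " - ".join(parts)

pw = readable_password()
print(pw)
print(spell_out(pw))
```

The uppercase/lowercase split on the phonetic words (FOXTROT vs. foxtrot) is what saves you from the "is that a capital F?" round trip.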
Printers & You Posted: 28 Jan 2020 08:13 PM PST

Be me, an in-house applications support person. Since I have 12 years of IT experience, I get chosen to find the new office printer. Mind you, this is an office of 35 people, give or take, at any moment.

The printer that we currently use has gone offline and does not respond to any of my willful tasks. Even backdooring into it does not succeed. This is a printer that has been used for the past year or so and has shown signs of degradation. So since the printer is dying, guess who gets to find a new one?

And it can't be just any printer, it HAS to be the all-in-one kind, the same kind where you can scan and fax all together. The printer will be working over Wi-Fi, with people usually printing between 2 and 10 pages each per day. I was tasked to find a new printer for the entire office. With a budget of $200.

I've made my comments to the upper chain regarding the chosen printer not being a suitable find, and really any printer at that price not being a suitable match for the office, given office print behaviors and, of course, the budget. UM decides to go with a printer made for home office use, which usually means a 5-to-10-page printout every other day.

Someone save me. [link] [comments]
Incomplete Password Policies Posted: 28 Jan 2020 01:15 PM PST

User: A user. Not all that relevant, since I only exchanged emails with him.
Me: Me, of course.
Boss: My boss. Recently promoted and eager to show his initiative.
AITC (Awesome IT Contact, for short): An awesome and helpful co-worker from the other IT company who was a massive help but eventually got canned.

As a prelude, I was part of a third-party tech support group. Basically, imagine it like this: Company #1 wants tech support and hires Tech Support Company #1 (TSC). TSC needs to expand its roster but doesn't want to hire new people into its own company, so it hires a third party to provide IT workers (us).

Now on to the story: There is a lot of normal stuff you do in tech support; most of it is about resetting passwords, and I have always kept to common sense when advising users on setting up a new password. However, there was this case where a user just couldn't set a password for themselves. Around the sixth or seventh try I just knew that there was something wrong, so I gave him a temporary password so he could work for the day, while I inquired into what the official company password policy is, so that we could fix this once and for all.

Me: (writing a mail to AITC) Hey, I have a user ID here where we tried to set the password but it kept telling the user the password was invalid. Could I get a copy of the official company password guidelines?

AITC: Sure thing! It will take a bit as I have to find it myself, but I will get back to you.

Me: Awesome.

Three days later I got a copy of the guidelines. Three of the following four need to be fulfilled in a new password: lowercase letters (a-z), uppercase letters (A-Z), numbers, and special characters such as !#$%&, etc. All very sensible stuff that I already advised users on as well. Then I read the next line.

The password cannot contain any 2 consecutive letters of the user's full name.

Hold on. So if your name is Michael Smith, then you cannot use MI, IC, CH, HA, AE, or EL (from "Michael"), nor SM, MI, IT, or TH (from "Smith"), in your password.

I kept this email and shared it with my colleagues, and things worked fine for us. Not many read the email, and I wager a lot of them simply forgot about it. But I kept it in mind, and whenever they had issues with setting a new password, I'd inquire whether the new password contained 2 letters of the user's name. In 99% of cases it did, and advising them not to do that fixed it.

Then along comes my Boss. Recently promoted and eager to show off his abilities, he decided to set up his own knowledge base.

Me: "With a search function?"

Boss: "It will be on the web and everyone will have access an-" He kept going, but I kind of tuned out. It didn't really sound that different from our current knowledge base, which still had out-of-date entries from 2005 in it, but if we were going to fill it, it would at least have current technical stuff in it that we had verified. His knowledge base was a multi-page document on Google Drive. But hey, it might be good and have some worthwhile information from others (it wasn't and didn't), and I could just as well provide him with some of the information, as others had shared some of their knowledge too. So I forward the entire email with the password policy to him.

A few days later I approach him about an unrelated incident and ask if he could add the password policy as well, since he hadn't done it so far.

Boss: Yes, yes, I am on it. There is just so much to do. But I will get to it right now.

Me: Alright.

Me: ... You missed something.

Boss: No, look, I got everything. Capital letters, numbers, special characters.

Me: What about the last part, where you are not allowed to use two letters of your name in the password?

Boss: That's irrelevant.

Me: ... No? It's really not.

Boss: Yeah, yeah, whatever. Look, I got other stuff to do as well right now.

Me: I cannot stress how important that part of the password policy is.

Boss: I got the important stuff.

Me: Not all of it! You literally just have to copy-paste it from my mail.

Boss: I will do it later.

He never did. His knowledge base didn't pick up either, as we kept using the company knowledge base, and it is probably still sitting there on Google Drive, unfilled and badly formatted, with wrong information.

Edit: Edits galore because I am used to a WYSIWYG editor! In other words, formatting! [link] [comments]
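For the curious, the whole policy fits in a few lines of code, which makes the Boss's refusal to copy one extra sentence even funnier. Below is a minimal sketch of a validator for it; the 3-of-4 rule and the special-character set come from the story, but everything else (function names, how the name pairs are derived) is just my guess at a reasonable implementation.

```python
import re

def name_pairs(full_name: str) -> set[str]:
    """Every pair of adjacent letters within each part of the user's name."""
    parts = re.sub(r"[^a-z]+", " ", full_name.lower()).split()
    return {part[i:i + 2] for part in parts for i in range(len(part) - 1)}

def meets_policy(password: str, full_name: str) -> bool:
    """At least 3 of 4 character classes, and no two adjacent letters of the user's name."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in "!#$%&" for c in password),  # special characters named in the policy
    ]
    pw = password.lower()
    no_name_pairs = not any(pair in pw for pair in name_pairs(full_name))
    return sum(classes) >= 3 and no_name_pairs

# "Michael Smith" rules out mi, ic, ch, ha, ae, el, sm, it, th.
print(meets_policy("Fang744Skull", "Michael Smith"))   # True  -- none of those pairs appear
print(meets_policy("MickeyMouse1!", "Michael Smith"))  # False -- contains "ic"
```

It also shows why users kept tripping over the rule: with a long enough name, those banned pairs quietly cover a surprising chunk of ordinary words.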
About password policies Posted: 28 Jan 2020 05:33 AM PST

Hello TFTS, long-time poster here, first-time lurker... No wait, it's actually the other way around.

I work as a senior developer in a small business, and part of my job is to help the junior developers with their tasks. I always prefer to concentrate on my own tasks, but I never try to avoid helping them, so they can get some experience and learn new things. Call it hope for the next generation, I guess.

$Me = Me

So I was having a great time enjoying my coffee and working hard to stay busy on my own work when, unfortunately, my softphone rings with PM on the other end.

PM: Hi $Me, Jd has to work on integration between <in-house software> and <cloud-based application>. Please show him everything he needs to connect to the cloud app and show him the part he needs to work on.

$Me: No problem. I'm on it.

This kind of exchange was common, since this PM works in a remote office and prefers that someone in the same office give briefings instead of remotely connecting and taking twice the time to explain everything. So I jot down where I'm at in my timesheet, save everything I was working on, and take my coffee to go help Jd.

$Me: Hey Jd, PM wants me to show you a specific part of <cloud-based application>.

Jd: No problem, let me open it up.

He then proceeds to open up his favorite browser (Brave in this instance, but it is nearly identical to Chrome for those who aren't aware of it) and chooses the URL of the application from his favorites. Now, this application was integrated with our Active Directory and authenticated via Windows Authentication through another internal IIS server. A prompt opens up asking him for his username / password, with already pre-filled info. He presses Enter and the prompt reappears. Instead of realizing that the password is wrong, he just mashes Enter 5 more times, to no avail.

$Me: Maybe you had to change your password?

We have a policy to change passwords every n months, so I don't blame him for not remembering every place he has to update it.

Jd: Right! I forgot!

He then decides to crush my hope in the next generation right there... He goes to the password field and does what an insane person would totally do: he erases the last character and types in a new one.

It worked.

$Me: Did you just... I have no words for that. I need more coffee.

Jd: *laughs*

I show him all the rest that he needs to work on and slump back to my desk with a fresh new coffee. I tried to stay concentrated on my own tasks afterwards, and kept it to emails when I could. [link] [comments]
Of Login Errors and Deceptive DNS Posted: 28 Jan 2020 07:53 AM PST

A tale beginning this past weekend and continuing on through today:

It was late Sunday night and all through the clinic
Not a thing was wrong, even for a cynic
DNS was behaving, VPNs were up
I arrive early Monday, coffee warm in my cup
Prepared for the day, I thought I was
Sitting down at a PC, my head slightly abuzz
Logon authentication errors greeted my eyes
You could almost hear the muffled internal cries
For lo and behold, as surely as can be
Nothing was working, for all to see
Pings traveled merrily through their assigned path
Destinations were reached, increasing my wrath
The RODC fails to log into itself
A glorified brick, up on its shelf
The virtual adapter, a workhorse of a fellow
To start off today, it was colored in yellow
Remove and re-add, it made no change
I am no IT god, nor even a mage
A call to my boss, the last to touch it
The line rang through, no answer, f*ck it
Local passwords were had, written down in hand
Access was denied, as if I was banned
The VPN still showed up, yet something was off
Is it really the firewall, I said with a scoff
It worked yesterday, and for many hours past
Factory reset it was, one day before last
A tug on the cord, the switch I did flick
Back together it went, with an audible click
Lights came on in a glorious flutter
Back upstairs and towards the clutter
Pings again ran through to the other side
As the rest became clear I smiled wide
Authentication was back and login I did
The rest will be easy..... who am I to kid

[link] [comments]
Call Center Tale #3: The Move, Part 3 Posted: 28 Jan 2020 01:54 PM PST

When we last left off, we were being corporate ninjas and exfiltrating critical hardware under the cover of night. Which basically meant coming in at an ungodly hour, stuffing servers into garbage bags, and taking them to one of our cars. Anyone who's moved a rack server knows they're unwieldy at best, even the small 1U ones. Now take that awkwardness, wrap it in slippery plastic, and move them without breaking anything, because you'll be blamed for it. And that wasn't even the worst of it. Doing this at 2 AM, on the Monday we went live in the new location, and hoping nothing broke just from downing the servers and transporting them. That was worse. These servers still had some critical functions running. We hadn't replicated everything (not enough hardware), so we'd just thrown up our hands, mirrored what we could, and said the rest would come when we moved.

Wonder of wonders, the servers made it. Mostly.

This was a call center, and we had several different phone switches. One so old, we had a guy in retirement on retainer to come in and fix it when it broke; none of us knew how to fix it. Another was custom-built by a now-defunct telephony company, made for us to some specific requirements for an old campaign. The campaign died, but the switch lived on, warped and twisted by being used far beyond its original purpose. The thing served about 90% of our incoming calls. And the last one was our most modern: a Fireworx switch that we barely knew how to use, because we could only tinker with it in our copious free time. It served one campaign.

A bit more on our primary switch: it used some custom software on the PC to take calls. This software would wait for a call to come in, and would then display a screen for the agent to use. It was never meant to handle multiple campaigns. And it ran multiple campaigns. How, you ask? The way it displayed the screen was by hooking into Internet Explorer. It was essentially a web page. So someone had the brilliant idea of turning that landing page into an automatic redirect, based on the number the call was coming in on. The redirect was equal parts JavaScript and VBScript. It was equal parts brilliant and headache-inducing. Campaigns were basically an internal website. New campaigns had to be added to the switch database with unique identifiers, then to that landing page with the proper redirect from the identifiers, and then the campaign had to be uploaded to our local web server so that it'd work. For anyone who missed it, we used VBScript to help us figure out where to punt the agent to. I know what you're thinking, and the answer is yes. Yes, all our campaigns were written in Classic ASP.

The whole thing was a Cthulhu-esque, nightmare-inducing heart attack, one step away from complete disaster. Because not only was the switch using Internet Explorer, it expected a certain version of Internet Explorer. IE 6, to be precise. Anything newer, and baked-in updates ensured nothing worked. Anything older, and the security settings we needed to change didn't exist, and then nothing worked anyway. So we couldn't even update the call center computers.

Back to the story. Guess which one broke? The one that ran most of the company, of course. We had a spare, and that's what was already active in the new location. It, however, had been inactive for years, and was as yet untested under load. It'd been turned off for a reason, and we had no idea when it would decide to finally give up the ghost completely.
And now the unstable spare was the production switch. The one running most of our business. And we had no backup. And no way to get another one; that company was kaput. All we could do was cross our fingers, hope the spare was going to last, and start trying to figure out how to migrate campaigns off of it onto Fireworx. And maybe try to get the old one working again. Nobody had time to do that, of course. The management thought process could be summed up as, "It's working, what are you wasting time on?" We had bigger fires to deal with. Such as getting everything ready for our permanent space. Temporary location, remember?

I'll skip much of the tribulations between Move #1 and Move #2; you've got a fantastic idea of how it went. On to the aftermath. The new space was supposed to be our masterpiece, the example to show off to clients that we were The Company to do this. Our new server room was supposed to be part of this, so no expense would be spared making it the shining jewel of our workplace. So the new room was much bigger than our old, crowded one. It would be a glorious feather in our cap to have things humming, green lights blinking quietly, cables organized, and... things just working. Like they should. We were even promised some temperature monitoring.

What we got... was something else completely. To save money, the electricians also ran the network cables. We got to use some old overhead cable management racks to run the bundles across the building. Halfway through the build, we realized something like 90% of the network cable didn't work. None of it was labelled. All this hanging cable, and nobody knew where any of it went. We had to hire someone else to test each and every one, and re-run the ones that didn't work.

The (new! beautiful! state-of-the-art!) server room was placed right beside the call center floor. Nowhere else to put it, apparently. Building management even put in brand new ventilation and A/C for our space, to ensure our poor, overloaded servers weren't going to die of heat exhaustion. That A/C was tied into the same zone as the call center floor. Who were always complaining that it was too cold. I couldn't blame them; it had to counteract all the heat the servers were throwing off. So it was a daily battle for the thermostat, which, for some inscrutable reason, was placed directly in the middle of the call center floor. We'd drop it to ensure our servers would keep working; the grunts on the floor would raise it because they were freezing.

Once we had finally finished moving everything to the new building, we allowed ourselves one sigh of relief before moving on to the next fire. I was rightly proud of the work we'd done on that server room; we'd busted ass to make it clean and neat, and I wanted to show it off. I invited a couple of friends from my nearby studio to come check it out one Friday night. Neither of them were IT, but they were happy to come by. I ushered them past the still-working call center people and got to show off my hard work.

The next day, I got a call from my boss. "Server room's overheating. Need help." By the time I arrived, the boss had bought a couple of those rolling air conditioners and was hacking holes in the wall to vent the waste heat onto the call center floor. We got those running, and I let him know that things had been working the previous evening when I brought some friends by. He had no issues with it. Good boss, remember?
So I come in Monday, and senior management is livid that unauthorized people were in the server room. Good boss, but not enough seniority to stop the hammer. They were paranoid that somebody had maybe stolen some data or sabotaged the server room. Yeah, right. The A/C dying wasn't from showing off the room. If anything, the daily battle for the thermostat was to blame.

Aftermath: I got written up for showing off the fruit of my hard work. That was the start of the A/C battle. It would die, we'd tell building management, they'd get it working again, rinse and repeat. I used to check the server room several times a day, I was that paranoid. I never did figure out why it kept dying. But, then, I had other issues to deal with. Like upgrading our SQL Server... [link] [comments]