• Breaking News

    Saturday, December 28, 2019

    No, we have no idea what caused that scorch mark

    Posted: 27 Dec 2019 12:44 PM PST

    Hello! I've loved this sub for quite a while and wanted to contribute a little story of my own. Hopefully a hardware tech support story counts.

    Not too long ago I was at a startup designing a power meter. I was stoked, as this was my first ever big girl job out of school, and this project was my baby. My colleague and I put it together skunkworks style in about a year, and it could do fancy real-time power reporting and analysis for grid usage and power production (via solar, wind, etc.). I was so damn proud of it, and when we got our first big order in I was really excited for the first installs.

    We'd put together training days, manuals, docs, and software to configure them, and thought we had it all down and ready for the big day. We designed and sold them, but the final installs were done by electricians from the company that bought them. Since they aren't engineers, we wanted it to be as simple to install as possible. All it took was four screws and six wires from the panel, all colour coded and labelled on our device. Power it up, tell it what network to connect to, and that's all that was needed - what could go wrong, right?

    Cue the very first install. It was early Monday, I'd barely had my coffee, and a call was escalated to me. Since we were a startup, the chain was Level 1 -> the engineers who made the things. The electrician on the phone was already upset, moaning that our device wasn't working at all. So I got him to walk me through how he installed it, what happened, and what he was seeing. He said he wired it in just like we said and powered it up, but nothing happened, and our service software didn't recognise it.

    I got him to open it up and try to put it in service mode, which should blink out any errors or wiring faults the device sees. Before he can even push the button, he says the little service light on the board is just freaking out and blinking randomly. Not good ): I ask him to check that the colours match; he says it's all fine and keeps complaining that our unit is junk. Nothing more to do at this point, so I just tell him to RMA it to me directly, and we'll send him a new one.

    Now I'm pretty bummed out at this point. First install and not a single thing went right, all the checks we put in didn't help. I'm itching for this unit to come back, so when it finally does we bring it to the lab expecting some crazy firmware bug we just didn't catch. Open the lid, and it just stinks of burned electronics. Right near where power comes in there's an obvious black mark on one of the traces. Pull the board out, flip it over, and our processor is just a damn crater.

    Follow the burned traces back, and they go to our sensor input. This sensor input is supposed to receive ~1V, and this guy gave it 240V. Now this sensor connector is tiny, and obviously not where you power the unit, yet this guy managed to force the wires into it anyway. Because reading labels and matching colours to colours is too hard. Why, just why. On a follow-up call the guy denied wiring it wrong or hearing any sad electronics blowing up.

    submitted by /u/Barlocore

    Roaming

    Posted: 27 Dec 2019 04:50 AM PST

    We have a handful of IoT sensors in various places that report telemetry such as humidity and brightness. These have data SIMs that support virtually any network and will roam anywhere in the world, budgeted at about 10-40 MB per month, because that's effectively all they send.

    A call came in from the carrier, asking about a budget threshold being triggered on a single device. We take a look and start asking around about who is responsible for it and where the device physically is.

    It turns out one of the staff has taken a SIM out of a monitoring black box, put it in their mobile because their own plan doesn't work overseas, and gone on holiday with it, using 20 GB or so, entirely on full-speed roaming.

    These SIMs support hundreds of countries and networks, from 2G to 4G, at a rate of about 0.50 USD per MB. I requested that the SIM be disabled. The next day, an email went out to the entire net ops distribution list asking why their now-disabled mobile plan isn't working and complaining that it's inconvenient. This was forwarded to their manager. I don't know what comes next, and I'm off tomorrow.

    Edit: I'm told by a teammate that it's been negotiated down to about 12-19c per MB as a bulk purchase of block data instead of PAYG overages. The person involved is not being terminated for now, but their department has been charged back for the amount. Not nearly as exciting as expected, but they have a nasty meeting to return to.
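    For scale, the numbers in the story pencil out like this. A rough sketch only; it assumes "20 GB" means 20 × 1024 MB, the full 0.50 USD/MB PAYG rate, and roughly 0.15 USD/MB as the midpoint of the quoted 12-19c bulk rate:

```python
# Rough cost of the roaming incident, under stated assumptions:
# "20 GB" taken as 20 * 1024 MB, PAYG roaming at 0.50 USD/MB, and the
# renegotiated block-data rate at ~0.15 USD/MB (midpoint of 12-19c).
MB_PER_GB = 1024
usage_mb = 20 * MB_PER_GB          # ~20 GB of full-speed roaming

payg_cost = usage_mb * 0.50        # pay-as-you-go overage
bulk_cost = usage_mb * 0.15        # negotiated block-data purchase

print(f"PAYG: ${payg_cost:,.2f}")  # PAYG: $10,240.00
print(f"Bulk: ${bulk_cost:,.2f}")  # Bulk: $3,072.00
```

    Either way, a five-figure (or at best four-figure) bill against a device budgeted for tens of megabytes is exactly the kind of anomaly the carrier's threshold alarm exists to catch.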

    submitted by /u/PM_ME_BAD_SOFTWARE

    Pop and no one gets paid

    Posted: 27 Dec 2019 05:19 AM PST

    So in my first ever job I worked at a small IT firm

    The company was tiny and was always getting local work just because we were close by

    So a large construction firm brings in a server to have an additional hard drive fitted

    This was back in the IDE days

    The server was a payroll machine and was in a pretty nice rack mount case

    So they open the case and fit the drive, and with the case still open they power the computer back on

    One of the engineers, who hasn't seen many motherboards, asks his colleague what something was... pointing a screwdriver at it

    The tip gets too close and there is a loud pop

    The server goes off and will not power back on

    Faint burnt smell

    They established that something had blown on the board and that it would take a few days to get a new one

    The construction firm proceeds to tell all their staff they aren't getting paid because of us

    We ended up with 50+ angry bruisers hammering on the door (we locked it) demanding we fix the "puter" now so they can get paid

    We didn't see that customer again

    submitted by /u/warmachine83uk

    These printers are divas. Every single one of them. But the stupidity of people is far worse

    Posted: 27 Dec 2019 05:53 AM PST

    So I work for a major company in their IT department as an IT technician. We have multiple printers in use. The printer model I'm talking about here is specifically the Zebra ZT410. They are used to print shipping information onto self-sticking labels and are connected via USB to thin clients that the shipping guys work at. We have certain standards on the printers (e.g. firmware, type of labels/paper, and so on). Most problems we have are usually solved by updating the firmware, rebooting, or recalibrating. Simple stuff, am I right?

    I was proven wrong.

    These printers decide for themselves when they want to work, it seems. We can go days with zero issues, but occasionally there are errors. The thing is, they would fail en masse: whole lines of printers suddenly stop working overnight. We had to manually reconfigure and recalibrate 20-30 printers roughly every two weeks. These odd regularities made me wonder why the printers kept losing their firmware and config.

    After some investigating I found out that the manager who oversees the whole shipping area was on duty the night before each of these incidents. So I talked to him about the matter and asked whether he saw anything out of the ordinary on those nights. He denied seeing anything when he updated the printers "as usual"...

    Me: "As usual? what do you mean?"

    M: "Your manager told me that the printers needed to be updated on a regular basis. So I have this USB drive to update them."

    I look at the drive and see the firmware on it is about a year old and outdated. He had flashed every printer with old firmware, which made them lose compliance with the system accessing them and also made them lose their settings.

    Me: (internally facepalming) "Why did you do this? You got that drive about a year ago, right?"

    M: "Yeah, but that doesn't mean I can't still use it to update the printers, does it?"

    He actually thought that the firmware on the drive was somehow magically overwritten with the newest version and that he could keep using it to update them (which he wasn't supposed to do at all anyway; updates are done by the IT guys, not by some manager). He had the intention of making our job easier by taking such simple tasks off our hands. So his heart was in the right place, but his mind wasn't quite capable enough. We set a new password on the printers and told him that we appreciated the effort, but that in order to keep documentation in line we had to do these updates and checkups ourselves. (Lying so as not to embarrass him.)

    We had zero problems with them (outside of small ones like labels stuck inside) after this
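    A cheap safeguard against this failure mode is a periodic audit that compares each printer's reported firmware version against the fleet baseline, so a mass downgrade gets flagged before a whole line goes dark. A minimal sketch; the printer names and the exact Zebra version-string format used here are assumptions for illustration:

```python
import re

def parse_fw(version: str) -> tuple:
    """Pull the numeric parts out of a firmware string so versions can
    be compared as tuples (e.g. 'V72.20.01Z' -> (72, 20, 1)). The exact
    Zebra version format is an assumption."""
    return tuple(int(n) for n in re.findall(r"\d+", version))

def audit(fleet: dict, baseline: str) -> list:
    """Return the printers reporting firmware older than the baseline."""
    return [name for name, fw in sorted(fleet.items())
            if parse_fw(fw) < parse_fw(baseline)]

# Hypothetical snapshot: shipping-02 was flashed from the year-old drive.
fleet = {"shipping-01": "V72.20.01Z", "shipping-02": "V72.19.06Z"}
print(audit(fleet, baseline="V72.20.01Z"))  # ['shipping-02']
```

    Run nightly against whatever inventory source reports the versions, this would have caught the "helpful" manager after one shift instead of after weeks of recalibration.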

    submitted by /u/JumboGER

    The longer it takes, the dumber the solution

    Posted: 27 Dec 2019 06:00 AM PST

    Thursday, 8:30 PM, the phone rings and my boss is on the line. He has a problem with one of his biggest clients. Old servers (2008 & 2012) keep shutting down for no apparent reason. The weird phenomenon isn't affecting the new 2016 servers. The affected systems are on two different ESXi hosts.

    At first we thought it had something to do with the host. A quick glance at reddit's r/vmware and Google. Nope, no big outage worldwide. It's just our system.

    Next I looked at VMware's logs for the machine (located in the same folder as the machine's VMX and other files). Nothing in particular, just a weird note about the VMware Tools 'legacy' version at the beginning of the shutdown sequence.

    My hunch at this point was that the problem was initiating from WITHIN the server and not from the host. All the Event Viewer could say was: "Legacy API shutdown". No help there.
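    That "Legacy API shutdown" text appears in Windows event ID 1074 entries (source User32), and the same event records which process requested the shutdown; filtering for it would have named the culprit much sooner. A minimal sketch over a hypothetical CSV export of the System log (the upsmon.exe path and machine name are invented):

```python
import csv
import io

# Hypothetical CSV export of the Windows System log; the upsmon.exe
# path and server name are invented for illustration.
exported = """\
TimeGenerated,EventID,Source,Message
2019-12-26 20:31:02,1074,User32,The process C:\\ups\\upsmon.exe has initiated the shutdown of SRV-2008
2019-12-26 20:15:11,7036,Service Control Manager,The Spooler service entered the running state
"""

def shutdown_initiators(text: str) -> list:
    """Return the messages of event 1074 entries, which record which
    process requested each shutdown."""
    rows = csv.DictReader(io.StringIO(text))
    return [row["Message"] for row in rows if row["EventID"] == "1074"]

for msg in shutdown_initiators(exported):
    print(msg)
```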

    We tried repairing VMware Tools and then removing them entirely. We tried removing ESET, Veeam, and other software that we thought might be able to affect the system. We checked that WMI was blocked at the firewall. All this time, my boss kept getting calls from the company about estimates for solving the problem.

    And then, when all seemed lost (kidding, but indeed we had no clue at this point, and it's really difficult to troubleshoot when you keep getting kicked from the session and have to start the machine again), I saw a little popup message: "UPS battery low, the computer will shut down in a minute".

    SERIOUSLY??? 2.5 hours for this??

    That's not the end of it. There was no one in their main office at this hour to verify the power situation. We woke up one of my colleagues to try to check through the cameras what the condition was. His sleepy response: everything is OK. So we tried to remove the UPS software from the server to stop the shutdowns until we could check on the UPS in the morning. Not ten minutes passed before the UPS finally emptied and all the servers lost power.

    Apparently, a sleepy colleague is like a client: never to be trusted. Especially if he is the one that has to go out into the weather and drive 40 minutes each way to flip the switch back up... Like always: trust technology more than humans.

    The frustrating part was knowing that the longer it takes, the dumber the eventual solution will be. Since the UPS software was installed only on the old servers, that threw us off course and caused us a very long evening. I hope he configures the UPS to send an email next time...

    TLDR: Servers keep shutting down. After long and painful troubleshooting: the UPS was just doing its job, since the power was down.

    submitted by /u/BedekComp

    It's all Word all the way down

    Posted: 26 Dec 2019 05:18 PM PST

    A few years ago I worked a few months for an office supply retailer in their technology department, which mostly meant selling printers and ink, but also involved selling computers and providing "tech support."

    I was suspicious of the "tech support" we did at the time, since it consisted solely of downloading an app to allow a contractor from another company to remote in and do something on the computer.

    Through some news reports this year, I've since learned that my suspicions were correct, as the sole purpose of the "tech support" was that it would end up saying there was a virus (whether or not there actually was one) so that the customer could be charged for virus removal, etc.

    So, one day, a customer brings in a computer. I don't see what the issue is because I'm busy at the time and a different employee takes it in and starts to run it through the standard "tech support" procedures.

    The next day, I check on it and learn the issue it was brought in for hadn't been fixed.

    The issue was that the computer tried to open every file type with Microsoft Word. This meant they hadn't actually been able to even download the "tech support" software, let alone let the whole process play out, which I highly doubt would have solved the problem anyway.

    Fortunately, I had seen this same issue before at a different job and I knew it could be fixed with some registry edits. A quick search on one of the display laptops found a script to make the fix, and after downloading it and copying it over to the customer's machine on a USB drive, the problem was quickly solved.
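    The story doesn't say which keys the script touched, but a common cause of "everything opens in Word" is a per-user open-with override on a critical extension such as .lnk, which then cascades to every shortcut. Assuming that was the culprit, the repair can be as small as deleting the user's UserChoice key so Explorer falls back to the default handler. A hypothetical sketch; the exact key path can vary by Windows version:

```reg
Windows Registry Editor Version 5.00

; Hypothetical fix: drop the user's "always open with Word" override
; for .lnk shortcuts; Explorer then falls back to the default handler.
[-HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.lnk\UserChoice]
```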

    Of course, because of company policy, once that was fixed they still had to go through with the standard "tech support" procedures and "remove the viruses" or whatever they told the customer. But I could at least be happy knowing that, in this one case, the customer's problem was actually solved.

    submitted by /u/MOOPY1973
