
Google Voice bug shuts down incoming calls

A bug introduced by Google's programmers disrupted inbound calling this week for me and for other Google Voice users who rely on Obi devices to make GV telephone calls, according to a post this morning by top contributor Bluescat on the Google Voice forums.

I use the Obi100, a small box that connects to your telephone and router to let you make toll-free telephone calls using your Google Voice number.

I use Google Voice as the master telephone number for my business, and the Obi lets me make and receive calls from my cordless desktop headset. When scheduled calls started coming in, it didn't take long to figure out that something was horribly wrong: my headset remained silent. I could call out, but I could not hear or answer incoming calls.

Fortunately, I had configured GV to forward incoming calls to my mobile as well. But cellular reception is poor in my office, so as a workaround I forwarded incoming GV calls to a backup line in my office while I spent the next hour trying, unsuccessfully, to troubleshoot the problem. That's the nice thing about GV: you can have your GV number ring as many other phone numbers as you like.

“I’ve been told that the Google engineering staff is working on this now.  This wasn’t intentional.  There was a bug that caused the Chat ‘phone’ to disappear off of the GV settings Web page.  It’s estimated that this will be resolved today (Thursday, 1/29) some time — probably by EOD,” Bluescat posted.

For the Obi to work, you must forward your GV number to the "Chat phone" option that appears on your Google Voice phone settings page. It was that option's sudden disappearance from the GV settings screen that created the problem.

Kudos to Bluescat for tracking this down. A fix can't come soon enough. But I've also learned a valuable lesson about depending on free services to support my business, and about having alternatives ready when my primary service goes down.

So, in addition to having a backup office line, I'm also beefing up my local Verizon cellular service with a Verizon Network Extender, a wireless box that acts like a cell tower in your home and routes your mobile calls to Verizon over the Internet. At $250 the box is expensive, but you can find one on eBay for about half that price, and Verizon charges no monthly fee to use it.

Update: Computerworld Blogger JR Raphael (Android Power) tells me that his Obi still rings even though the Google Chat option disappeared from his GV settings. Apparently not all users are affected in the same way.


Adding the “cool factor” to private cloud architectures

Ask IT executives what's driving initiatives to build their own private cloud infrastructures and they will tell you that it's all about operational efficiency within IT. It's about agility, and about driving down labor costs. Having already virtualized servers, enterprises are now working on software defined storage and networking in a bid to eliminate the need for manual configuration, to enable user self-service when fulfilling IT infrastructure requests, and to allow infrastructure to respond automatically to the collective needs of all applications running in the data center.

The motivation to cut labor costs makes sense. After all, labor is one of the biggest parts of the IT operating budget. But the next biggest cost is power and cooling, and not all enterprises are taking that into account — at least not yet. CIOs focus on software defined servers, storage and networks. They don’t always realize that they should wrap power and cooling into their software defined data center infrastructures.

Data center energy costs aren't always part of the IT budget, but with data center power consumption rising, an architecture that can dynamically optimize power and cooling loads will become an important consideration for the business as private cloud architectures evolve. By the same token, leaving power and cooling out of your planning horizon could prove risky.

So far, not much has been done to extend private cloud automation all the way from the application layer down through server, storage and networking and into power and cooling systems. A fully software defined data center (SDDC) could step down server processors as application workload requirements change, shut down power distribution paths, and relocate virtual machines into one area of the data center during off-peak evening hours to save energy. While power and cooling system vendors have made their data available through APIs, that data largely goes unused in enterprise-grade SDDC systems. OpenStack, for example, one of the most popular open source architectures on which enterprises are building SDDCs, doesn't address the issue at all.
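To make the processor piece concrete, here is a minimal sketch, in Python, of what a per-node "step down off-peak" agent might look like. It assumes a Linux host that exposes the standard cpufreq sysfs interface and a script running as root; the off-peak window and governor names are illustrative, and in a real SDDC a central policy engine would drive agents like this rather than relying on each node's clock.

```python
#!/usr/bin/env python3
"""Sketch: step server processors down during off-peak hours.

Assumes a Linux host with the standard cpufreq sysfs interface and
root privileges. The window and governor names are illustrative.
"""
import glob
from datetime import datetime


def is_off_peak(hour: int) -> bool:
    # Illustrative off-peak window: 10 p.m. through 6 a.m.
    return hour >= 22 or hour < 6


def set_cpu_governor(governor: str) -> None:
    """Apply the given cpufreq governor to every CPU on this host."""
    for path in glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)


if __name__ == "__main__":
    if is_off_peak(datetime.now().hour):
        set_cpu_governor("powersave")    # step processors down off-peak
    else:
        set_cpu_governor("performance")  # full speed during peak hours
```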

Internet giants like Google and Microsoft, which have built custom cloud infrastructures from the ground up, have been working with power and cooling system vendors for some time. But enterprise customers have been mostly silent. "The Internet giants are asking us to do more than our enterprise customers are, especially on converged IT infrastructure," says a spokesperson at Schneider Electric.

However, some forward-thinking enterprise IT organizations are already laying the groundwork. Kevin Humphries, senior vice president of IT at FedEx Corporate Services, says that storage and networking currently dominate the discussion because those have been the hardest parts of the data center to automate. But ultimately, all architectural layers will be software defined, from the application layer all the way down to power distribution, heating, cooling, and even white space. "The way we envision the evolution of our data centers and infrastructure, software rules everything," he says.

Das Kamhout, principal engineer at Intel, says the chip maker started with a custom-built private cloud a few years ago, when commercial software defined networking and storage products weren’t ready for prime time. Since then it has moved to a software defined data center architecture built using OpenStack-compliant components. Using low-level power and cooling system data to make decisions about moving resource pools or stepping down servers to save energy is coming. “Today it’s in the very basic stages,” Kamhout says, but it will evolve as private cloud automation tools continue to advance.
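As a sketch of what using that low-level data might look like, the snippet below polls a node's baseboard management controller for its instantaneous power draw through ipmitool's DCMI interface, the kind of reading a placement engine could use to steer new virtual machines away from nodes already near a power budget. It assumes ipmitool is installed and the BMC supports DCMI power readings; the output parsing and the budget figure are assumptions that may need adjusting per vendor.

```python
#!/usr/bin/env python3
"""Sketch: feed node-level power draw into a VM placement decision.

Assumes ipmitool is installed and the node's BMC supports DCMI power
readings. The parsing matches common ipmitool output but can vary by
vendor, and the power budget below is illustrative, not a vendor figure.
"""
import re
import subprocess

POWER_BUDGET_WATTS = 400  # illustrative per-node budget


def read_power_watts() -> int:
    """Return the instantaneous power draw reported by the local BMC."""
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    if not match:
        raise RuntimeError("unexpected ipmitool output:\n" + out)
    return int(match.group(1))


def node_accepts_new_vms() -> bool:
    """A scheduler filter: reject nodes already near their power budget."""
    return read_power_watts() < POWER_BUDGET_WATTS


if __name__ == "__main__":
    watts = read_power_watts()
    print(f"power draw: {watts} W; under budget: {watts < POWER_BUDGET_WATTS}")
```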
