Have Google watchers overestimated the number of servers in the company’s data centre network? Recent figures have put the count at more than 1 million machines. However, new data from Google suggests the company operates around 900,000 servers.
Google has never disclosed how many machines its data centres are running. The latest estimate is based on information from Professor Jonathan Koomey of Stanford, who recently published an updated study on data centre electricity use.
Google’s David Jacobowitz told Koomey that the energy used by the organisation’s data centres was less than 1 percent of 198.8 billion kWh, the approximate worldwide power consumption of data centres in 2010. That implies Google’s entire global data centre network draws approximately 220 megawatts of power.
“Google’s data center electricity use is about 0.01% of total worldwide electricity use and less than 1 percent of global data center electricity use in 2010,” Koomey writes while cautioning that his numbers represent educated guesses extrapolated from the company’s information.
“This result is in part a function of the higher infrastructure efficiency of Google’s facilities compared to in-house data centers, which is consistent with efficiencies of other cloud computing installations, but it also reflects lower electricity use per server for Google’s highly optimized servers.”
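As a quick sanity check, Koomey’s percentages convert directly into an average power draw. The short Python sketch below treats the “less than 1 percent” figure as an upper bound and reproduces the roughly 220-megawatt estimate; all of the input numbers come from the paragraphs above.

```python
# Back-of-envelope check of the ~220 MW figure (a sketch; the 1% share
# and the 198.8 billion kWh total are the estimates quoted above).
WORLD_DC_KWH_2010 = 198.8e9   # worldwide data centre use, kWh per year
GOOGLE_SHARE = 0.01           # "less than 1 percent" -- upper bound
HOURS_PER_YEAR = 8760

google_kwh = WORLD_DC_KWH_2010 * GOOGLE_SHARE      # ~1.99 billion kWh/year
avg_power_mw = google_kwh / HOURS_PER_YEAR / 1000  # kWh/h -> kW -> MW

print(f"Average draw: ~{avg_power_mw:.0f} MW")     # ~227 MW, i.e. roughly 220 MW
```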
High-Efficiency Data Centers, Low-Power Servers
Google’s data centres are designed around the industry’s best practices in architecture and operations. The company pioneered running facilities at warmer temperatures and building energy-efficient, chiller-free data centres. Google’s custom servers pair the power supply with an on-board battery, so each machine effectively carries its own uninterruptible power supply (UPS). This design moves the UPS and battery-backup functions out of the data centre and into the server cabinet.
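To see why moving the battery onto the server matters, here is a rough, illustrative comparison. The efficiency figures below are assumptions chosen for the sketch, not numbers Google has published: a centralized double-conversion UPS wastes a few percent of all the power flowing through it, while a per-server battery sits outside the main power path.

```python
# Illustrative comparison (assumed efficiencies, NOT Google's published data):
# a centralized UPS converts all incoming power, while an on-board battery
# only trickle-charges and otherwise stays out of the power path.
CENTRAL_UPS_EFFICIENCY = 0.92   # assumed typical double-conversion UPS
ONBOARD_EFFICIENCY = 0.999      # assumed near-lossless per-server battery
FLEET_POWER_MW = 220            # the estimated figure discussed above

central_loss = FLEET_POWER_MW * (1 - CENTRAL_UPS_EFFICIENCY)
onboard_loss = FLEET_POWER_MW * (1 - ONBOARD_EFFICIENCY)

print(f"Centralized UPS loss:  ~{central_loss:.1f} MW")   # ~17.6 MW
print(f"On-board battery loss: ~{onboard_loss:.2f} MW")   # ~0.22 MW
```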
Google is equipped to handle far bigger server fleets in the future. The company has developed a storage and processing framework named Spanner to simplify operating Google services across several data centers. It is designed to automatically distribute resources across “entire computer fleets” ranging from 1 million to 10 million machines.
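Spanner’s internals are not described here, but the underlying problem it tackles (placing data and work evenly across a fleet that can grow to millions of machines) is commonly approached with consistent hashing. The sketch below is a minimal illustration of that general technique, not Google’s actual implementation; the server names and key are invented.

```python
import bisect
import hashlib

# Minimal consistent-hashing sketch: spreads keys across a large, growable
# machine fleet. This is NOT Spanner's actual design, just a common
# technique for the same resource-distribution problem.
class HashRing:
    def __init__(self, machines, replicas=100):
        # Each machine gets many points on the ring so load stays even.
        self._ring = sorted(
            (self._hash(f"{m}:{i}"), m)
            for m in machines
            for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def machine_for(self, key: str) -> str:
        # Walk clockwise to the first ring point at or past the key's hash.
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing([f"server-{n}" for n in range(1000)])
print(ring.machine_for("users/alice/profile"))  # deterministic placement
```

Adding or removing a machine only remaps the keys near its ring points, which is what makes schemes like this workable at fleet scale.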
In addition to not disclosing server counts, Google also doesn’t release data on the electricity it uses or provisions for its data centers. Local reports have suggested that Google arranges power capacity of 50 megawatts and beyond for some of its largest data centers. If the company is actually running its infrastructure using just 220 megawatts of power, that would suggest that Google is provisioning power for significant future expansion at these sites.
Cloud Vs Local Servers – Where to Store Data?
Data is the most important asset we have as research scientists. Good data helps us advance in our careers, and mishandled data can easily derail that progress. In an era in which topics such as data aggregation, data consistency, and open access to data are increasingly prominent, this article will help you decide correctly when it comes to storing your research data.
Everyone talks about data privacy, security, and safety, but where should the data actually live? Here we will give you an insight into the advantages and drawbacks of storing your research data on a cloud versus a local server.
- Cloud and local servers
- Cloud pros & cons
- Local server pros & cons
Local & Cloud Servers
The ongoing digitization of the laboratory ecosystem translates into a tremendous increase in digital data generation. This data, whether it is raw instrument output or the contents of your lab notebook, needs to be stored securely somewhere, and two options are available: cloud or local servers.
A cloud server is a remote server (usually housed in a data center) that you access via the internet; you rent the server space rather than owning the hardware. A local (regular) server is one that you buy and physically own, and it sits on-site with you.
Cloud Servers – Pros & Cons
You are already using several cloud-based tools, including email providers (Gmail, Outlook, etc.), storage/backup software (iCloud, Dropbox, Box, etc.), and any social media platforms on which you have an account.
Pros
- Maintenance & upgrades
- Easy adjustment of storage space
- Data stored remotely
- Accessible wherever there is internet access
The first pro of using a cloud is that the cloud provider handles all of the maintenance and upgrades, which means you have one less thing to worry about. It is also easy to scale the amount of cloud storage up or down, so you only pay for the space you need.
The data is also stored remotely and never stored on your computer, meaning it is not occupying space unnecessarily. If there are technical issues on site, your data will be safe in the cloud. A final pro is that you can access the data stored in the cloud from wherever there is an internet connection.
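To make the “accessible wherever there is internet” point concrete, here is a minimal sketch of pushing a dataset to a cloud object store and pulling it back from another machine. It uses AWS S3 via boto3 purely as an example; the bucket name and file paths are placeholders, not values from this article.

```python
import boto3

# Minimal sketch: store a dataset in a cloud object store so any machine
# with internet access and credentials can retrieve it. Bucket name and
# file paths are hypothetical placeholders.
s3 = boto3.client("s3")

# Upload from the lab machine...
s3.upload_file("results/experiment_042.csv", "my-lab-bucket",
               "experiment_042.csv")

# ...and later download from anywhere (office, home, a collaborator's site).
s3.download_file("my-lab-bucket", "experiment_042.csv",
                 "experiment_042.csv")
```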
Cons
- Cannot access data without the internet
- Transferring data out of the cloud
The flip side of internet access is that a weak connection can make it difficult to reach your data. Some software does let you access data offline, but you will either be unable to edit it until you reconnect, or your edits will sync later. You should also check how easy it would be to transfer your data elsewhere should you stop using the cloud.
Local Servers – Pros & Cons
In your research group, department, or institute you might already have a local server available. Instead of storing your microscope data on the microscope computer, you transfer it to another storage device so that you can access it from other computers and keep the microscope computer from filling up with data in a single day.
Pros
- Up/download speed
- System set-up control
- Security
The first pro of using a local server is speed: how quickly you can upload data to and download data from the server. You also have total control over the system setup, so you can make sure it fits your exact needs.
The control also extends to your backups and everything else to do with the data, since you own the server outright. A local server, on-site, can also feel more secure, since only you and your team can physically (and, of course, digitally) access it.
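The speed advantage is easy to put rough numbers on. The link speeds below are assumptions for illustration (a typical gigabit lab network versus a typical office internet uplink), not measurements:

```python
# Rough transfer-time comparison (assumed link speeds, for illustration only).
DATASET_GB = 50                      # e.g. one day of microscope images
LAN_MBPS = 1000                      # assumed gigabit local network
INTERNET_MBPS = 100                  # assumed office uplink to a cloud

def hours(gigabytes: float, megabits_per_second: float) -> float:
    # GB -> megabits, then divide by link speed and convert seconds to hours.
    return gigabytes * 8 * 1000 / megabits_per_second / 3600

print(f"Local server: {hours(DATASET_GB, LAN_MBPS):.2f} h")       # ~0.11 h
print(f"Cloud upload: {hours(DATASET_GB, INTERNET_MBPS):.2f} h")  # ~1.11 h
```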
Cons
- Installation of expensive hardware
- Will need maintenance
The main con of a local server is having to install it and then maintain it. The hardware can be costly, and if problems arise, you will need to do the troubleshooting yourself. This is, of course, where the IT team comes to save the day!
How Much Faster Is a Google Server Than a Normal PC?
Besides the fact (as others have mentioned) that Google and other major sites do not use ONE server but millions of them across multiple giant computing centers, they also use another trick to respond faster: they store many of the common elements of their web pages in an “edge cache” located near your home.
It works like this:
Look at the web page you are viewing right now. There are parts that are very specific responses to your input, but many things are pretty generic and *everyone* sees them. Some are even quite static – icons and logos that don’t change for months or even years.
When you access a host system, your inquiry may go a long way over the internet to get to the host, and the host responds over that same path. That takes time. We might measure it in fractions of a second, but it still adds up.
So, a long time ago, someone clever figured out that they could save time (and bandwidth) if the content many people saw was stored near those people instead of at the host site. They pre-positioned large files such as graphics at ISPs who agreed to host the content. As a result, the entire web page reaches you much more quickly.
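A toy version of this idea fits in a few lines of Python: check a nearby cache for static assets first, and only travel to the distant origin on a miss. The URL and latency figures below are invented for illustration.

```python
import time

# Toy edge-cache sketch: serve static assets (logos, icons) from a nearby
# cache and fall through to the distant origin only on a miss. Latencies
# are invented for illustration.
EDGE_CACHE: dict[str, bytes] = {}

def fetch_from_origin(url: str) -> bytes:
    time.sleep(0.120)            # pretend ~120 ms round trip to the origin
    return b"<asset bytes>"      # placeholder content

def fetch(url: str) -> bytes:
    if url in EDGE_CACHE:        # near-instant hit at the nearby edge
        return EDGE_CACHE[url]
    body = fetch_from_origin(url)
    EDGE_CACHE[url] = body       # keep the generic, rarely-changing asset
    return body

fetch("https://example.com/logo.png")   # slow: goes to the origin
fetch("https://example.com/logo.png")   # fast: served from the edge cache
```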
For some, the speed of response was critical to their business. If you do a search, for example, you want the answers to come as quickly as possible.
These days, it is a common technique used by all the big web sites. Tens of thousands of internet service locations support “edge cache”. Back in the late 1990s, my job was to develop the procedures to implement 5,000 new locations per year, using satellites to broadcast the common content all over the world. We gave each participating ISP $30,000 in satellite equipment, which reduced their connection fees.
Source: Quora