

High Availability and Disaster Recovery

The terms High Availability (HA) and Disaster Recovery (DR) are heard frequently and often used interchangeably, which is why it is important to clarify the distinction between the two so that companies understand the unique capabilities and role of each, and how each can be used most effectively within their organization. High Availability is a technology design that minimizes IT disruptions by providing continuity through redundant or fault-tolerant components. Disaster Recovery, by contrast, is a pre-planned approach for re-establishing IT functions and their supporting components at an alternate facility when normal repair activities cannot recover them in a reasonable timeframe.

Can Disaster Recovery Include High Availability? It can, and often does. A common configuration is to run highly available clustered servers for an application in the production datacenter while keeping backup hardware in a recovery datacenter. Data from the production servers is backed up or replicated to the recovery datacenter, so systems are both protected from component failures at the production datacenter and recoverable at the recovery datacenter during a disaster.

You may also come across end users talking about adding a “business continuity disaster recovery” solution when what they really want is to make a service highly available. More often than not, elements of both high availability and disaster recovery are blended into the discussion. If you are a service provider listening to requirements, it helps to ask two clarifying questions: is geographic diversity needed, and how much downtime can be tolerated before the system must be restored? The answers will tell you what to expect and set you on the right path.

Web Server Farms: How Do They Work?

A server farm, also referred to as a server cluster, computer farm or ranch, is a group of computers acting as servers and housed together in a single location. A server farm streamlines internal processes by distributing the workload between the individual components of the farm, expediting computing tasks by harnessing the power of multiple servers. A web server farm can be either a website that runs on more than one server, or an ISP (Internet service provider) that provides web hosting services using multiple servers.

Farms rely on load-balancing software that can track demand for processing power from different machines, prioritize tasks, and schedule and reschedule them depending on priority and the demand users put on the network. When one server in the farm fails, another can step in as a backup. In a business network, for example, a server farm or cluster might provide centralized access control, file access, printer sharing and backup for workstation users. The servers may have individual operating systems or a shared operating system, and may also be set up to balance the load when there are many server requests. On the Internet, a web server farm, or simply web farm, may refer to a website that uses two or more servers to handle user requests. Typically, serving requests for the files (pages) of a website can be handled by a single server; larger websites, however, may require multiple servers.
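The two behaviors described above, rotating requests across the farm and stepping over a failed server, can be sketched in a few lines of Python. The server names here are placeholders, and real load balancers add health probes, weighting and connection draining on top of this basic idea:

```python
import itertools

class RoundRobinBalancer:
    """Rotate requests across a pool of servers, skipping any marked down."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        self.down.add(server)

    def mark_up(self, server):
        self.down.discard(server)

    def next_server(self):
        # Try each server at most once per call; a failed node is skipped
        # and the next healthy one "steps in" as its backup.
        for _ in range(len(self.servers)):
            candidate = next(self._cycle)
            if candidate not in self.down:
                return candidate
        raise RuntimeError("no healthy servers available")

balancer = RoundRobinBalancer(["web1", "web2", "web3"])
balancer.mark_down("web2")                          # simulate a failed node
print([balancer.next_server() for _ in range(4)])   # ['web1', 'web3', 'web1', 'web3']
```

Requests keep flowing to `web1` and `web3` while `web2` is down; once it is marked up again, it rejoins the rotation.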

Combining servers and processing power into a single entity has been relatively common for many years in research and academic institutions. Today, more and more companies are utilizing server farms to handle the enormous amount of computation that their tasks and services require.

Usages of Proxy Servers

In computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server and requests some service, such as a file, connection, web page or other resource available from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems. Today, most proxies are web proxies, facilitating access to content on the World Wide Web and providing anonymity.

There are three main types of proxies: forward, open and reverse. A forward proxy is the one described above: the proxy server forwards the client’s request to the target server to establish communication between the two. An open proxy is a forward proxy that is openly available to any Internet user; most often, it is used to conceal the user’s IP address so that they remain anonymous during their web activity.

Unlike a forward proxy, where the client knows it is connecting through a proxy, a reverse proxy appears to the client as an ordinary server. When the client requests resources, the reverse proxy forwards those requests to the target server (the actual server where the resources reside), fetches the requested resource and returns it to the client. The client has the impression of connecting to the actual server, but in reality a reverse proxy sits between the client and that server. Reverse proxies are often used to reduce the load on the actual server through load balancing, to enhance security, and to cache static content so that it can be served faster to the client. Big companies like Google, which receive a large number of hits, maintain reverse proxies to enhance the performance of their servers: whenever you connect to google.com, you are actually connecting to a reverse proxy that forwards your search queries to the real servers and returns the results to you.
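The illusion a reverse proxy creates, the client talks to one server while the content really comes from another, can be sketched with Python's standard library. The origin address is a placeholder assumption, and this handles only simple GET requests with none of the caching, load balancing or security features a real reverse proxy adds:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ReverseProxy(BaseHTTPRequestHandler):
    """Looks like an ordinary web server to the client, but quietly
    fetches every response from the real (origin) server."""

    # Hypothetical origin address -- an assumption for this sketch.
    backend = "http://127.0.0.1:9000"

    def do_GET(self):
        # Forward the client's requested path to the origin server...
        with urlopen(self.backend + self.path) as upstream:
            body = upstream.read()
        # ...then relay the response; the client never sees the origin's address.
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

def run(port=8080):
    """Serve the proxy; clients connect here instead of to the origin."""
    HTTPServer(("127.0.0.1", port), ReverseProxy).serve_forever()
```

A browser pointed at the proxy's port sees normal pages and has no way to tell that another server produced them, which is exactly the situation described above for google.com.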

DNS Failover, Overcome your downtime!

As your business grows, it becomes more and more mission-critical, and any amount of downtime is damaging. You could lose hundreds if not thousands of dollars for every minute your site is down, and an outage can also hurt your brand image and customer confidence. This is why firms and individuals today rely on DNS failover: it monitors your server and, if the server is unavailable for a certain period of time, dynamically updates your DNS records so that your domain name points to an available server instead.

DNS failover is essentially a two-step process. The first step is actively monitoring the health of your servers. Basic monitoring uses ping (ICMP, the Internet Control Message Protocol) to verify that the host is reachable, while HTTP checks verify that the web server itself is responding. Server health is typically assessed every few minutes, and more advanced services let you configure the monitoring interval. In the second step, DNS records are dynamically updated to direct traffic to a backup host when the primary server is down. Once the primary server is back up and running, traffic is automatically directed to its original IP address.

Outages can and often do occur because of hardware failures, malicious attacks (DDoS, hackers), scheduled maintenance and upgrades, and man-made or even natural disasters. DNS failover helps limit the resulting downtime, giving firms and individuals time to fix the underlying problem. It is not the complete package, though, and does not come without limitations. For it to work, you need backup locations for your site and applications. And even if DNS records are updated quickly once an outage is detected, ISPs still need to refresh their cached DNS records, which happens based on the record’s TTL (Time to Live). Until that occurs, some users will still be directed to the downed primary server.
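The two-step process above can be sketched as a small controller: an HTTP health check feeds results into logic that repoints a DNS record after several consecutive failures. The IP addresses are placeholders, and `update_record` stands in for whatever provider-specific DNS API you actually use, so it is injected rather than implemented here:

```python
from urllib.request import urlopen
from urllib.error import URLError

def http_healthy(url, timeout=5):
    """Step 1: one health check -- is the HTTP server responding?"""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

class DnsFailover:
    """Step 2: after enough consecutive failures, repoint the DNS record
    at the backup; repoint it back once the primary recovers."""

    def __init__(self, primary_ip, backup_ip, update_record, failures_needed=3):
        self.primary_ip = primary_ip
        self.backup_ip = backup_ip
        self.update_record = update_record  # provider DNS API (hypothetical)
        self.failures_needed = failures_needed
        self.failures = 0
        self.active = primary_ip

    def observe(self, primary_healthy):
        """Feed in one health-check result; update DNS when the state flips."""
        if primary_healthy:
            self.failures = 0
            if self.active != self.primary_ip:      # primary recovered
                self.active = self.primary_ip
                self.update_record(self.primary_ip)
        else:
            self.failures += 1
            if self.failures >= self.failures_needed and self.active == self.primary_ip:
                self.active = self.backup_ip        # fail over to backup
                self.update_record(self.backup_ip)
```

A monitoring loop would simply call `observe(http_healthy("http://example.com/"))` every few minutes. Note that requiring several consecutive failures avoids flapping the record on a single missed check, while the TTL caveat above still applies once `update_record` fires.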

Network latency

Network latency is the term used to indicate any kind of delay in data communication over a network. Connections with small delays are called low-latency networks, whereas connections that suffer long delays are called high-latency networks. High latency creates bottlenecks in any network communication: it prevents data from taking full advantage of the network pipe and effectively decreases the communication bandwidth. The impact of latency on network bandwidth can be temporary or persistent depending on the source of the delays.

Both bandwidth and latency depend on more than your Internet connection; they are affected by your network hardware, the remote server’s location and connection, and the Internet routers between your computer and the server. Packets of data don’t travel through routers instantly: each router a packet traverses introduces a delay of a few milliseconds, which can add up if the packet has to travel through many routers to reach the other side of the world. Some types of connections, such as satellite Internet connections, have high latency even in the best conditions; it generally takes between 500 and 700 ms for a packet to reach an Internet service provider over a satellite link. Latency isn’t just a problem for satellite connections, however. You can probably browse a website hosted on another continent without noticing latency very much, but if you are in California browsing a website whose servers are located only in Europe, the delay may be more noticeable. There is no doubt that latency is always with us; it is just a matter of how significant it is. At low latencies, data transfers almost instantaneously and we can’t perceive a delay; as latencies increase, the delay becomes noticeable.
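One rough way to observe this delay yourself is to time a TCP connection handshake, which requires a full round trip to the server. This is a simplified stand-in for proper tools like `ping`; the host and port you probe are up to you:

```python
import socket
import time

def tcp_latency_ms(host, port, timeout=3.0):
    """Time one TCP connection handshake -- a rough proxy for the
    round-trip delay between this machine and a server."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return None  # unreachable (or refused) within the timeout
    return (time.perf_counter() - start) * 1000.0

# tcp_latency_ms("example.com", 80) -> a few ms to a nearby server,
# but potentially 500 ms or more over a satellite connection.
```

Running it against a nearby server and then a distant one makes the router-by-router accumulation described above directly visible.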

What is Virtualization? Why go for it?

When you hear people talk about virtualization, they are usually referring to server virtualization: a combination of software and hardware engineering that creates virtual machines (VMs), an abstraction of the computer hardware that allows a single machine to act as if it were many machines. Each virtual machine can interact independently with other devices, applications, data and users as though it were a separate physical resource.

Why is virtualization used? A growing number of organizations use it to reduce power consumption and air-conditioning needs and to trim the building space and land requirements that have always accompanied server farm growth. Virtualization also provides high availability for critical applications and streamlines application deployment and migration. It can simplify IT operations and allow IT organizations to respond faster to changing business demands. Virtualization is not a magic bullet for everything, though. While many solutions are great candidates for running virtually, applications that need a lot of memory, processing power or input/output may be best left on a dedicated server. For all of its upsides, virtualization can also introduce new challenges for firms to face. But in most cases the cost and efficiency advantages will outweigh most if not all of the cons, and virtualization will continue to grow and gain popularity in today’s world.