Denial of Service Prevention Techniques

Abstract

Denial of service is one of the major issues in networking: attackers attempt to make computer resources unavailable to legitimate users. There are several techniques for preventing denial of service, each with its own advantages and disadvantages. In this paper, I explain some of the most useful techniques extensively and comparatively. The purpose of this paper is to establish which technique to use and when, so that we can be protected from denial of service threats.

Introduction

Denial-of-service, also known as DoS, is an attack in which a computer user is denied services that are supposed to be accessible to all. Although the attacks may differ, the practice is usually intended to prevent a computer service from functioning effectively, or from functioning at all, and may be short-lived or indefinite. DoS attacks normally target sites or services run on high-profile web servers, such as those of financial institutions, government record systems and root name servers. Apart from paralyzing the intended computer, an attack may also cause problems for all the computers connected to the same network and, if carried out on a large scale, may lead to internet connectivity problems across a large geographical region.

The problem has been worsened by the increase in internet usage in recent years, which has led to a corresponding increase in the number of hosts. The interconnected infrastructure that makes up the internet comprises limited resources: processing power, bandwidth and storage capacity are the commonest targets of attackers. These resources can be used up, resulting in disruption of services. DoS attacks are often launched from any point at which the victim's network forms external links. On many occasions, the launching point is one or more systems that have been undermined or sabotaged by the intruder through a security compromise, rather than the intruder's own system. A DoS attack can be implemented in different ways; the main types of attack are:

  • Consumption of computational resources such as processing capacity and bandwidth
  • Disruption of configuration information
  • Disruption of state information, e.g., of internet protocols
  • Disruption of peripheral networking equipment
  • Blocking of the communication channel between computer users

The purpose of this paper is to discuss the techniques used to prevent denial-of-service attacks, outlining how each one works together with its advantages and disadvantages.

Distributed Denial of Service

Denial-of-service is a technological malpractice in which a computer user is deprived of computer resources that he or she is entitled to access. Such practices have been on the rise because of the ease with which attack tools can be obtained; these include Trinoo, TFN, Stacheldraht, Shaft, Mstream, Knight and Trinity, among others. However, advances in technology have made it possible to repel such attacks.

The methods used in preventing attacks attempt to stop known attack signatures from being launched and to keep machines updated with security patches. Two major techniques are used: general methods and filtering. General techniques include disabling unused network services and installing security updates on the system. Firewalls can also be used to curb attacks; there are three categories of firewalls in use: application gateways, packet filters and hybrid systems. Filtering is an efficient way of cutting out DoS attacks and can be carried out in a number of ways; filtering techniques include router-based filtering, history-based IP filtering and Secure Overlay Services. Even with all the tools dedicated to fighting this vice, however, DoS attacks are still on the rise. The following techniques have also proven successful in managing DoS attacks: over-provisioning of web traffic capacity, redundant monitoring to detect attacks, and dumping the web server logs to facilitate recovery from an attack.

DDoS Attack Tools

Distributed Denial of Service is a malicious practice in which legitimate users are deprived of particular services, such as email or connection to the internet, that they are normally entitled to (Douligeris & Mitrokotsa, 2004, p. 645). Usually this problem results from an overload of resources such as memory, CPU cycles and bandwidth.

The main reason why DDoS attacks are so prevalent and easy to carry out is that the attack tools are readily available and are very powerful at generating attack traffic (Park & Lee, 2001, p. 15). Several tools found on the internet allow the execution of attacks on a targeted system. The common tools include the following:

  1. Trinoo: this tool can be used to launch a properly coordinated UDP flooding attack. It deploys a master/slave architecture, and the person launching the attack is able to control several Trinoo machines. Communication between the attacker and the master uses TCP, while that between master and slave uses UDP (Douligeris & Mitrokotsa, 2004, p. 645). The system requires passwords to be entered; these restrictions protect the master and the slaves, so that the two elements are safe from being taken over by another attacker.
  2. TFN is a tool that uses a command line interface for communication between the attacker and the master control program, and it provides no encryption between attacker and master, between masters, or between slaves. Communication between master and slave is achieved through ICMP echo reply packets (Mirkovic & Reiher, 2004, p. 39), and the tool can launch ICMP flooding attacks.
  3. TFN2K is an advanced version of TFN. It uses TCP, ICMP, UDP, or all of these for communication, and is able to execute SYN, UDP and Smurf flooding attacks. Communication is encrypted using the key-based CAST algorithm (Mirkovic & Reiher, 2004, p. 39). The tool can also carry out some vulnerability attacks by sending invalid or malformed packets.
  4. Stacheldraht combines the best features of TFN and Trinoo and also has the ability to automatically update the slave machines. It uses an encrypted TCP connection for communication between the attacker and the master, while the attack daemons are controlled via ICMP and TCP. The tool can execute UDP flooding and SYN flooding attacks (Douligeris & Mitrokotsa, 2004, p. 649).
  5. Shaft is derived from the Trinoo technology. Apart from the port numbers used for communication, its mode of working is very similar to Trinoo, but it can switch control servers and ports in real time, a feature that has made detection of intrusion very difficult. UDP carries the communication between slave and master systems (Mirkovic & Reiher, 2004, p. 39). Shaft is able to execute UDP and TCP flooding attacks.
  6. Mstream is the most primitive tool; it attacks systems with TCP ACK floods. Communication is via TCP and UDP but is not encrypted. The master can be managed remotely by attackers using shared, password-protected logins. The source addresses found in the attack packets can be spoofed arbitrarily (Mirkovic & Reiher, 2004, p. 41). The tool informs the master of every access, whether successful or not.
  7. Knight uses IRC as its control channel, and reports indicate that this tool is very common. It is usually installed on machines previously compromised by a Trojan. Knight is able to execute SYN attacks as well as UDP flooding. The tool is designed to work on the Windows OS and has features such as automatic updating via HTTP.
  8. Trinity is a tool that uses IRC and is able to execute SYN, IP fragment, TCP ACK and UDP flooding attacks. Each compromised device connects to a particular IRC port and waits for commands. Because legitimate IRC services are used for communication between agent and attacker, the need for a master machine is eliminated and the level of threat increases.

Preventing Attacks

Protecting a system against a DoS attack is far from being an exact or complete science. Rate limiting, parameter tweaking and packet filtering can occasionally help to limit the impact of a DoS attack, but only where the attack consumes few of the available resources. Most of the time the only defense is reactive: the source of the attack must be identified and stopped so that it cannot continue. Source IP address spoofing during an attack, together with the invention of distributed attack techniques and tools, presents a constant challenge for the systems that have to react to DoS attacks.

Previously, DoS attacks involved uncomplicated tools that created and sent packets from a single source at a single target. With time, these tools have expanded to effect single-source attacks against numerous targets, attacks from numerous sources against a single target, and attacks from numerous sources against numerous targets. Currently, many attacks reported to CERT/CC involve sending numerous packets to a destination, causing an excessive amount of endpoint, and possibly transit, bandwidth to be consumed. Such attacks are frequently identified as packet flooding. Single-source attacks on a single target are common, as are multiple sources attacking a single target; multiple-target attacks are less common (Mirkovic & Reiher, 2004, p. 43).

Setting up an attack is only a few keystrokes away: the attacker needs only a handful of commands to attack the victim's machine. It is possible for the victim to block some of these attacks at the network boundary by configuring the network with traditional security tools such as access lists, intrusion detection systems or firewalls. Nonetheless, regular benign traffic towards the victim's network is not thereby safeguarded, and the victim may also lose access to other networks. Given the current state of technology, there are numerous challenges encountered in designing and deploying effective DDoS defense tools, including the following:

  • a very large number of unsuspecting participants;
  • the absence of common characteristics that describe a DDoS stream;
  • the lack of administrative domains willing and ready to cooperate across domains;
  • the fact that most attackers mimic legitimate traffic patterns;
  • the automation of the attack tools;
  • the hidden identities under which most internet users operate;
  • the persistent security holes that characterize the internet;
  • the general lack of attack information; and
  • the absence of standardized evaluation and testing approaches.

For these reasons, at least five principles are recommended for developing an efficient solution. First, since denial of service is a distributed attack, and because of the large volume of attack traffic that victims face, applying a distributed defense mechanism rather than a centralized one is considered the first principle of DDoS defense.

Secondly, a high survival ratio for normal packets, that is, as little collateral damage as possible, is a key prerequisite for DDoS defense. Thirdly, DDoS defense methods should provide secure channels for exchanging control messages, with respect to properties such as message confidentiality, authenticity and freshness. Fourthly, defense systems that require no centralized control are more readily deployable, because most of the machines and systems that would have to cooperate lack any centralized control device. Finally, a defense system should take into consideration future compatibility, such as interfacing with other systems and accommodating other defense strategies and policies.

There are also some characteristics that a system should have for effective DDoS defense. The system should be invoked only during attacks and remain dormant at other times so that normal operation can take place; this means it must integrate readily with the existing architecture with minimal adjustments or modifications. It must also offer a simple and efficient way of counter-attacking the sources of attacks in order to curb them, and it should be able to identify an attack at the target and stop it at its source. The system should also be designed to divert only attack traffic away from the targeted victim, which means that the model used must be able to differentiate between a malicious traffic flood and the usual benign flow. This can be achieved by incorporating a variety of attack signatures for a variety of attacking sources.

The system should also have a fast reaction time and must respond quickly to any changes observed in the attack traffic pattern. Finally, it should offer a means of retaining evidence of an attack for substantiation, which can be useful in any case that necessitates a lawsuit in the future.

DDoS Prevention Means

The methods used to prevent attacks all attempt to stop known attack signatures from being launched, to keep machines updated with security patches that close known gaps, and to disable broadcast-based DoS amplification at edge routers. Attempts at prevention have not been very efficient, because systems are always susceptible to new and mixed attack forms for which signatures and patches do not yet exist. There are two major techniques for preventing DDoS. The first comprises general measures, including system protection practices that, when adopted by servers and ISPs, keep them from becoming part of a DDoS attack. The second technique is filtering, which includes ingress filtering, the SAVE protocol and packet filtering, among others.

General techniques

Several measures are grouped under this option. One is disabling unused services: if a host has fewer open ports and applications, there are fewer vulnerabilities for attackers to exploit. For this reason, when network services are not required or not used, they should be disabled to prevent attacks, for instance the UDP echo service.

A second measure is installing the latest security patches. Current DDoS attackers exploit vulnerabilities in the target system; when known security gaps are removed by installing the appropriate, up-to-date patches, the scope for exploiting the target system is reduced.

When an attack relies on intermediate broadcast nodes, as ICMP flooding attacks do, it can be countered by disabling IP broadcast on the host computer and on the neighbouring network.

Firewalls

Firewalls can be very effective in curbing attacks, since they prevent the attacker from launching flood attacks against machines behind them; users of those machines are therefore safer. Firewalls habitually apply very straightforward rules, such as permitting or denying particular protocols or IP addresses. However, it is difficult for a firewall to differentiate between the different forms of attack because of their complexity.

Firewalls are currently among the best-known network security tools offering protection to information systems, and their use continues to increase. Nonetheless, they are most effective when used in concert with other protective software such as anti-virus programs and intrusion-detection systems. This requires an efficient security plan that is properly designed to meet the requirements of the security policy; effectiveness is enhanced this way. A firewall is essentially a set of components that together create a barrier between two networks.

Firewalls are able to filter the flow of data entering and leaving a system. The process uses one or more sets of rules to inspect network packets as they enter or leave the network connection; after the assessment, the data is allowed in or blocked accordingly. The rules can assess one or several properties of the packets, including but not limited to the protocol type, the source and destination host addresses, and the source and destination ports.

Firewalls are to a great extent beneficial to the security of host systems and networks. They can help in various functions, which include:

  • Protecting and insulating the applications, services and internal network components from unwanted data coming from the public internet.
  • Limiting or disabling access from hosts on the internal network to the public internet.
  • Supporting network address translation, which allows the internal network to use private IP addresses and share a single connection to the public internet.

Firewall rule sets are often created using 'inclusive' or 'exclusive' methods. An exclusive firewall permits all data to pass except that which matches the rule set. An inclusive firewall operates in the exact opposite way, allowing data to pass only if it matches the rule set; everything else is blocked (Anderson et al, 2004, p. 39). An inclusive firewall is therefore well positioned to protect or control outgoing data, making it the better option for systems that provide services to the open internet, and it is also efficient in controlling the kind of traffic originating from public networks that could gain entry into a private network (Oppliger, 1997, p. 93). Data that does not conform to the rules is blocked and logged by default. Inclusive firewalls are hence safer, since they considerably reduce the risk of admitting unwanted data.
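To make the distinction concrete, the following sketch (written in Python purely for illustration; the rule fields, addresses and ports are invented and do not come from any particular firewall product) shows how an inclusive, default-deny rule set handles a packet that matches no rule:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    protocol: str                    # e.g. "tcp", "udp", "icmp"
    src_ip: str
    dst_ip: str
    dst_port: Optional[int] = None

@dataclass
class Rule:
    action: str                      # "allow" or "deny"
    protocol: Optional[str] = None   # None matches any protocol
    dst_port: Optional[int] = None   # None matches any destination port

    def matches(self, pkt: Packet) -> bool:
        if self.protocol is not None and self.protocol != pkt.protocol:
            return False
        if self.dst_port is not None and self.dst_port != pkt.dst_port:
            return False
        return True

def filter_packet(pkt: Packet, rules: List[Rule], default_action: str) -> str:
    # Return the action of the first matching rule, or the default action.
    for rule in rules:
        if rule.matches(pkt):
            return rule.action
    return default_action

# Inclusive rule set: only explicitly listed traffic is allowed (default deny).
inclusive_rules = [Rule("allow", protocol="tcp", dst_port=80),
                   Rule("allow", protocol="tcp", dst_port=443)]

pkt = Packet("udp", "203.0.113.7", "192.0.2.10", 53)
print(filter_packet(pkt, inclusive_rules, default_action="deny"))   # -> deny

An exclusive firewall would be obtained simply by listing the traffic to block and passing default_action="allow", which is why it offers weaker protection against unexpected traffic.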

Tightening security further is possible through the use of a stateful firewall (Oppliger, 1997, p. 93). This type keeps track of the connections that have been opened through the firewall and selectively permits data through when it matches an existing connection or legitimately opens a new one. Nonetheless, it has a major disadvantage of being susceptible to denial-of-service attacks when many new connections are opened in a very short time (Anderson et al, 2004, p. 39). Many firewalls allow stateful inspection to be combined with other operations to create the most advantageous configuration for a specific site.
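A minimal sketch of this idea, again in Python and with an artificially small state table chosen only to demonstrate the weakness described above, shows how connection tracking works and why a burst of new connections can exhaust the table:

class StateTable:
    # Tracks open connections as (src_ip, src_port, dst_ip, dst_port) tuples.
    def __init__(self, max_entries: int = 4):
        self.max_entries = max_entries
        self.connections = set()

    def accept(self, flow) -> bool:
        if flow in self.connections:
            return True                 # packet belongs to an existing connection
        if len(self.connections) >= self.max_entries:
            return False                # table full: new connections are refused
        self.connections.add(flow)      # remember the newly opened connection
        return True

table = StateTable()
# A flood of new flows fills the table; the legitimate flow arriving last is refused.
for port in range(4):
    table.accept(("198.51.100.9", 40000 + port, "192.0.2.10", 80))
print(table.accept(("203.0.113.5", 51000, "192.0.2.10", 80)))        # -> False

Real firewalls age entries out of the table, but the same pressure applies whenever connections are opened faster than they expire.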

Global defense infrastructure: if a global defense system could be deployed, it could effectively prevent DDoS attacks by installing filtering rules in the pertinent routers of the internet (Anderson et al, 2004, p. 39). However, because the internet is administered by many autonomous systems, each following its own local security policy, such a global defense is possible only in theory.

IP hopping can also be used to prevent DDoS attacks. When the IP address or location of an active server is changed, the victim's old IP address is invalidated and replaced with another one. As the IP address changes, the internet routers are notified of the change, and the edge routers then discard the attacking packets. This technique has been shown to be practical in preventing attacks aimed at a fixed IP address; however, attacks can still occur through the new IP address (Park & Lee, 2001, p. 16). The technique can also be rendered useless when attackers add a domain name service lookup to the DDoS attack tool, allowing it to follow the new address.

There are basically three categories of firewalls in use; the different varieties are designed to address the needs of different users.

Application Gateways

These were the first firewalls and are also known as proxy gateways. They consist of bastion hosts running special software that acts as a proxy server. The software operates at the application layer of the ISO/OSI reference model, which is where the name comes from. Users behind the firewall must therefore know how to use the proxy in order to make use of internet services (Park & Lee, 2001, p. 16). Conventionally, application gateways have been very secure, since they do not permit anything to pass by default; programs must be written and turned on before they start allowing data through (Oppliger, 1997, p. 93). It is also pertinent to note that these firewalls are the slowest, as they have to start many processes in order to carry out a requested service.

Because packet filtering operates with lower overhead and the filtering is carried out by routers (special-purpose computers designed for networking duty), it is usually quicker than application gateway firewalls (Oppliger, 1997, p. 93).

Hybrid System

In an effort to combine the security of application gateways with the flexibility and speed of packet filtering, some experts have devised systems that use aspects of both models. In a number of these hybrid models, new connections must be authenticated and approved at the application layer. Once this is accomplished, the remainder of the connection is handed down to the next layer, where packet filters monitor the connection to ensure that only packets belonging to ongoing (authenticated and permitted) conversations are allowed to pass (Moore et al, 2006, p. 116). Other alternatives make use of both kinds of proxies. The advantage here is a measure of protection for machines that offer services to the internet, together with application-level security for the local network. In addition, with this model an intruder cannot get into the local network without first breaking through the access router and other barriers such as the bastion host (Laurens et al, 2009, p. 291).

Filtering

Filtering is a good way of cutting out intruders and ensuring that attacks are not successful. The first step is ingress/egress filtering. The role of ingress filtering is restrictive: traffic is discarded at the ingress router if it uses a source IP address that does not match a network prefix connected to that router. Egress filtering, on the other hand, is an outbound filter that ensures only traffic from the allocated or assigned IP address space leaves the network (Moore et al, 2006, p. 116). It is vital to know which IP addresses belong on a given port; however, this knowledge is hard to acquire in networks characterized by complex topologies.

The reverse path filtering technique can help in acquiring this knowledge. In this technique, a router that knows the network and is reachable via various interfaces evaluates the source address of an incoming packet to determine whether the same interface would be used for the return path; if so, the packet is allowed, and if not, it is dropped (Moore et al, 2006, p. 117). Beyond these two related filters, the port number and protocol type are also important criteria (Laurens et al, 2009, p. 291). Both egress and ingress filtering offer some chance of stifling the strength of a DoS attack. Nonetheless, it is hard to deploy ingress or egress filtering worldwide: if the attacker is able to carefully select a network without ingress or egress filtering from which to launch a spoofed attack, the DoS attack cannot easily be detected (Kejie et al, 2007, p. 5036), and if an attacker is able to spoof an IP address inside the local subnet, the attack may never be detected either (Moore et al, 2006, p. 119). Recent DDoS attackers are also able to launch effective attacks without using source address spoofing at all: simply by exploiting numerous compromised hosts, attackers do not need spoofing to exploit protocol vulnerabilities or to conceal their location (Kejie et al, 2007, p. 5036).
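The following sketch illustrates ingress filtering at an edge router in its simplest form; the customer prefix and addresses are invented for illustration, and, as noted above, real deployments must also cope with multi-homed and more complex topologies:

import ipaddress

# Prefix delegated to the customer network attached to this ingress interface (assumed).
customer_prefix = ipaddress.ip_network("198.51.100.0/24")

def ingress_allow(src_ip: str) -> bool:
    # Allow only packets whose source address belongs to the customer prefix.
    return ipaddress.ip_address(src_ip) in customer_prefix

print(ingress_allow("198.51.100.42"))   # True: legitimate source address
print(ingress_allow("10.1.2.3"))        # False: spoofed source, discarded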

Packet Filter

Packet filtering is a technique that turns on Access Control Lists (ACLs) on routers. In normal usage, routers simply pass data through without any restrictions (Park & Lee, 2001, p. 16). When ACLs are employed, security is enforced with regard to the kinds of traffic permitted to reach the public internet, or vice versa. There is little overhead in packet filtering compared with application gateways, because access control is carried out at a lower ISO/OSI layer, characteristically the session/transport layer.

Router-based filter

These are packet-filtering techniques. Router-based filtering was proposed by Park and Lee; it extends ingress filtering and uses route information to filter out spoofed IP packets. It is founded on the principle that, for every link in the core of the internet, only a limited set of source addresses could legitimately be the origin of the traffic on that link.

If an IP packet with an unexpected source address appears on a link, the assumption is made that the source address has been spoofed, and such packets can be filtered out. Route-based filtering uses information about the BGP routing topology to filter traffic carrying spoofed source addresses (Kejie et al, 2007, p. 5036). Simulation results indicate that a considerable proportion of spoofed packets can be filtered if the scheme is implemented in at least 18 percent of the autonomous systems (ASes) in the internet. Nonetheless, there are some limitations to this design (Peng et al, 2003, p. 483). The first relates to practical deployment: considering that there are over 10,000 ASes in the internet, the filter would have to be implemented in approximately 1,800 ASes to be effective, which is an onerous task to accomplish. Second, in the case of a route change, some legitimate packets could be dropped by the filter. The third limitation is the heavy reliance on the validity of BGP messages for configuring the filter: if an attacker is able to hijack BGP sessions and distribute spurious BGP messages, it is possible to mislead the border routers into changing the filtering rules in the attacker's favour. Route-based filtering is efficient in dealing with randomly spoofed attacks (Peng et al, 2003, p. 483); nonetheless, its filtering granularity is coarse, and attack traffic can avoid the filters through careful selection of IP addresses during spoofing. It is therefore not sufficient on its own for dealing with DDoS attacks.
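A simplified sketch of the route-based idea is given below; the per-link tables of expected source prefixes are invented here, whereas in the actual proposal they would be derived from BGP routing information:

import ipaddress

# Per-link sets of expected source prefixes (assumed values for illustration).
expected_sources = {
    "link-A": [ipaddress.ip_network("203.0.113.0/24")],
    "link-B": [ipaddress.ip_network("198.51.100.0/24"),
               ipaddress.ip_network("192.0.2.0/24")],
}

def route_filter(incoming_link: str, src_ip: str) -> bool:
    # Accept the packet only if its source prefix is expected on this link.
    addr = ipaddress.ip_address(src_ip)
    return any(addr in prefix for prefix in expected_sources.get(incoming_link, []))

print(route_filter("link-A", "203.0.113.9"))    # True: consistent with routing
print(route_filter("link-A", "198.51.100.9"))   # False: likely spoofed, dropped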

IP filtering based on the history

In general, the set of source IP addresses observed during normal operation is likely to remain stable; in contrast, during a DoS attack, most of the IP addresses seen have not been encountered before (Peng et al, 2003, p. 483). History-based IP filtering relies on this observation: a router admits traffic from a source address according to whether that address appears in a database of addresses previously seen for the destination. The major advantage of this architecture is that each destination can control traffic according to its own policy, which greatly reduces the chances of a successful attack. However, while such systems provide better protection for established communication, they have been blamed for producing a new attack type referred to as denial of capability, which stops new capability set-up packets from reaching their destination and thereby limits the value of the systems (Park & Lee, 2001, p. 16). Additionally, these systems are more complex, need a lot of storage space and suffer from computational issues.
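The essence of history-based filtering can be sketched as follows; the appearance threshold and the addresses are assumptions made purely for illustration, not parameters from the cited scheme:

from collections import Counter

class HistoryFilter:
    def __init__(self, min_appearances: int = 3):
        self.seen = Counter()
        self.min_appearances = min_appearances

    def observe(self, src_ip: str) -> None:
        # Called during normal operation to build the IP address database.
        self.seen[src_ip] += 1

    def admit(self, src_ip: str) -> bool:
        # Called while under attack: admit only well-established sources.
        return self.seen[src_ip] >= self.min_appearances

f = HistoryFilter()
for _ in range(5):
    f.observe("198.51.100.7")        # a regular legitimate client
print(f.admit("198.51.100.7"))       # True: known from the history
print(f.admit("203.0.113.200"))      # False: never seen before, dropped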

The capability-based technique offers the destination a way of controlling the traffic directed towards it. In this approach, the source first sends a request packet to the destination, and the routers mark the request packet as it passes through them (Peng et al, 2003, p. 485). The destination is not obliged to grant consent for the source to send; if permission is given, the destination returns a capability in the response packet, and if not, no capability is returned. Data packets carrying the capability are then forwarded to the destination through the routers.
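The exchange can be sketched as below; the capability here is a random per-source token, an invented simplification standing in for the router markings described in the literature:

import secrets

class Destination:
    def __init__(self):
        self.granted = {}                 # src_ip -> capability token

    def handle_request(self, src_ip: str, permit: bool):
        # Return a capability if the destination consents, otherwise None.
        if not permit:
            return None
        token = secrets.token_hex(8)
        self.granted[src_ip] = token
        return token

    def accept_data(self, src_ip: str, token) -> bool:
        # Only data packets carrying a valid capability are forwarded.
        return self.granted.get(src_ip) == token

dst = Destination()
cap = dst.handle_request("198.51.100.7", permit=True)
print(dst.accept_data("198.51.100.7", cap))       # True: capability matches
print(dst.accept_data("203.0.113.50", "bogus"))   # False: no capability granted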

Secure Overlay Services

This technique describes an architecture that offers secure communication between confirmed users and the victim. All traffic from any source must first be verified at a secure overlay access point, after which authenticated traffic is routed to a special overlay node (Anderson et al, 2004, p. 42). This node is referred to as the beacon. The beacon forwards the traffic to another special overlay node, known as the secret servlet, for further processing. The identity of the secret servlet is revealed to the beacon via a secure protocol, which enables it to remain secret from attackers. Finally, only traffic forwarded by the secret servlet chosen by the target is allowed through the perimeter routers.

SOS is responsible for ensuring effective communication between victims and legitimate users during a possible attack, and it can significantly reduce the likelihood of an attack succeeding (Anderson et al, 2004, p. 42).

SAVE: this protocol ensures effective updating of the information about the expected source IP addresses on every link, so that packets with unexpected source addresses can be blocked. The intent of this source address validity enforcement is to provide routers with the relevant information about the range of source IP addresses expected on each link towards each destination.

General Preventive Measures

Distributed denial of service has become a nightmare for any business. In just a minute, things that were previously working normally can turn turbulent, with the whole infrastructure suffering under fake internet traffic. Legitimate users get locked out of services, which means they cannot conduct their normal business, and everything grinds to a halt. In the recent past, the ease with which attackers can launch and execute DoS attacks has become a real threat, since the computers and software that execute these attacks are easily accessible.

It is very hard to stop DDoS attacks, and even the largest companies have found themselves surrendering to extortion and handing over money just to avoid the problems; short of paying, there is no certain way of stopping a determined attacker. Nevertheless, users can take some measures to reduce the risk of attack. These cover both the organization's design and its live operations, and they help genuine users avoid disruption. Some of the successful means of preventing attacks include the following.

Over-provisioning: in most cases DDoS attacks are brute-force in nature, which makes over-provisioning a brute-force counter-measure. The attacker has to put in enough traffic to overwhelm the system's capacity, so the chances of success can be decreased by provisioning for more traffic than is expected during normal operation. In this way the success of the attack is mitigated and its impact is checked. A common rule of thumb is to provision roughly ten times the normal expected peak, rather than providing only the exact capacity required; for example, a site that normally peaks at 100 requests per second would be provisioned to handle around 1,000.

Redundant monitoring: if uptime is very important to the users, they must have systems that monitor the performance and accessibility of the site. However, in-house monitoring can be of limited use during a DDoS attack: if the monitoring system is designed to alert users when attacked but sits behind the same connection as the site being monitored, the alert will probably not reach the users in time. Getting information quickly when attacked is important, so a more reliable alternative is to have a third party operate the monitoring system.

Dumping the logs: in general, web server logs cannot distinguish between a genuine user and an attacker; both are recorded in the same way. Even when the server is configured correctly and can recover from a DDoS attack, the piling up of logs can make the trouble worse, as the server may stop working because the logs have grown too large. If the logs are preventing recovery, it is better to clear and dump them.
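A minimal sketch of such a clean-up, assuming a hypothetical log location, is to archive the current log with a timestamp for later analysis and then truncate it so the server can continue writing:

import os
import shutil
import time

LOG_PATH = "/var/log/webserver/access.log"        # assumed location

def dump_and_truncate(log_path: str = LOG_PATH) -> str:
    # Copy the log aside as evidence, then truncate the live file in place.
    archive = f"{log_path}.{time.strftime('%Y%m%d-%H%M%S')}.dump"
    shutil.copy2(log_path, archive)
    with open(log_path, "w"):
        pass
    return archive

if __name__ == "__main__":
    if os.path.exists(LOG_PATH):
        print("archived to", dump_and_truncate())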

Conclusion

DoS attacks usually result in disruption of the victim's resources and consequently prevent legitimate users from accessing those resources. The distributed form of DoS is generated by several compromised machines coordinating an attack on an individual or organization. When a particular attack is successfully countered, slight changes are made so that it bypasses the defense systems and still achieves a successful attack. This paper has addressed measures that are still far from adequately preventing DDoS on the internet, but they can offer some protection, helping to ensure that information cannot be accessed, used, modified, destroyed or disrupted by unauthorized individuals. The aim of information security assurance is to guarantee the confidentiality, availability and integrity of authentic information. It is important to keep in mind that any machine on the internet can be compromised.

Reference List

Anderson, T., Roscoe, T. & Wetherall, D. (2004). Preventing internet denial-of-service with capabilities. In ACM SIGCOMM Computer Communication Review, 34(1): 39-44.

Douligeris, C. & Mitrokotsa, A. (2004). DDoS attacks and defense mechanisms: classification and state-of-the-art. Computer Networks 44(5): 643-666.

Kejie, L. et al. (2007). Robust and efficient detection of DDoS attacks for large-scale internet. Computer Networks: The International Journal of Computer and Telecommunications Networking, 51(18): 5036-5056.

Laurens, V. et al. (2009). Detecting DDoS attack at the source agents. International Journal of Advanced Media and Communication, 3(3): 290-311.

Mirkovic, J. & Reiher, P. (2004). Taxonomy of DDoS attack and DDoS defense mechanisms. In ACM SIGCOMM Computer Communications Review, 34(2): 39-53.

Moore, D., Shannon, C., Brown, D. J., Voelker, G. & Savage, S. (2006). Inferring internet denial-of-service activity. In ACM Transactions on Computer Systems, 24 (2): 115-139.

Oppliger, R. (1997). Internet security: firewall and beyond. In Communications of the ACM, 40(5): 92-102.

Park, K. & Lee, H. (2001). On the effectiveness of router-based packet filtering for distributed DoS attack prevention in power-law internets. In Proceedings of the ACM SIGCOMM Conference, 15-26.

Peng, T., Leckie, C. & Ramamohanarao, K. (2003). Protection from distributed denial of service attack using history-based IP filtering. In Proceedings of IEEE International Conference on Communications (ICC 2003), Anchorage, AK, USA, 1: 482-486.

Xiao, B., Chen, W. & He, Y. (2006). A novel approach to detecting DDoS attacks at an early stage. In The Journal of Supercomputing, 36(3): 235-248.
