Using the proper devices and solutions can help you defend your network. Here are the most common types of network security devices that can help you secure your network against external attacks:
A firewall device is one of the first lines of defense in a network because it isolates one network from another. Firewalls can be standalone systems or they can be included in other infrastructure devices, such as routers or servers. You can find both hardware and software firewall solutions; some firewalls are available as appliances that serve as the primary device separating two networks.
Firewalls exclude unwanted and undesirable network traffic from entering the organization’s systems. Depending on the organization’s firewall policy, the firewall may block some or all traffic, or it may verify some or all of it. There are two commonly used types of firewall policies:
- Whitelisting — The firewall denies all connections except for those specifically listed as acceptable.
- Blacklisting — The firewall allows all connections except those specifically listed as unacceptable.
There are four types of firewalls: packet-filtering firewalls, stateful packet-filtering firewalls, proxy firewalls and web application firewalls.
A packet-filtering firewall is a primary and simple type of network security firewall. It has filters that compare incoming and outgoing packets against a standard set of rules to decide whether to allow them to pass through. In most cases, the ruleset (sometimes called an access list) is predefined, based on a variety of metrics. Rules can include source/destination IP addresses, source/destination port numbers, and protocols used. Packet filtering occurs at Layer 3 and Layer 4 of the OSI model. Here are the common filtering options:
- The source IP address of the incoming packets — Every IP packet indicates where it originated, so you can approve or deny traffic by its source IP address. For example, many unauthorized sites or botnets can be blocked based on their IP addresses.
- The destination IP addresses — Destination IP addresses are the intended location of the packet at the receiving end of a transmission. Unicast packets have a single destination IP address and are normally intended for a single machine. Multicast or broadcast packets have a range of destination IP addresses and normally are destined for multiple machines on the network. Rulesets can be devised to block traffic to a particular IP address on the network to lessen the load on the target machine. Such measures can also be used to block unauthorized access to highly confidential machines on internal networks.
- The type of Internet protocols the packet contains — Layer 2 and Layer 3 packets include the type of protocol being used as part of their header structure. These packets can be any of the following types:
- Normal data-carrying IP packet
- Internet Control Message Protocol (ICMP)
- Address Resolution Protocol (ARP)
- Reverse Address Resolution Protocol (RARP)
- Bootstrap Protocol (BOOTP)
- Dynamic Host Configuration Protocol (DHCP)
Filtering can be based on the protocol information that the packets carry so you can block traffic that is transmitted by a certain protocol.
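The rule matching described above can be sketched in a few lines. This is a toy illustration (the rule fields and addresses are invented for the example, not any vendor's syntax): the first matching rule decides, and the default policy denies anything not explicitly allowed, i.e., whitelisting.

```python
# Toy sketch of packet-filter rule matching; rule fields and IPs are illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str            # "allow" or "deny"
    src_ip: str = "*"      # "*" matches any value
    dst_ip: str = "*"
    dst_port: object = "*"
    protocol: str = "*"

def matches(rule, packet):
    return all(
        getattr(rule, field) in ("*", packet[field])
        for field in ("src_ip", "dst_ip", "dst_port", "protocol")
    )

def filter_packet(rules, packet, default="deny"):
    for rule in rules:
        if matches(rule, packet):
            return rule.action
    return default  # whitelisting policy: deny anything not explicitly allowed

rules = [
    Rule("deny", src_ip="203.0.113.7"),                            # known botnet host
    Rule("allow", dst_ip="192.0.2.10", dst_port=443, protocol="tcp"),
]

pkt = {"src_ip": "198.51.100.5", "dst_ip": "192.0.2.10",
       "dst_port": 443, "protocol": "tcp"}
print(filter_packet(rules, pkt))  # allow
```

Note that rule order matters: the deny rule for the botnet host is checked before the broader allow rule, just as entries in a real access list are evaluated top-down.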
The main advantage of packet-filtering firewalls is speed: most of the work takes place at Layer 3 or below, and no complex application-level knowledge is required. Packet-filtering firewalls are most often deployed at the very edge of an organization’s network. For example, they are highly effective in protecting against denial-of-service (DoS) attacks that aim to take down sensitive systems on internal networks.
However, they have drawbacks, too. Because packet-filtering firewalls work at OSI Layer 3 or lower, they cannot examine application-level data, so application-specific attacks can pass straight through to sensitive internal networks. In addition, many packet-filtering firewalls cannot detect spoofed IP or ARP addresses: when an attacker forges the Layer 3 information, filters based on that information become ineffective. The main reason for deploying packet-filtering firewalls is to defend against the most general denial-of-service attacks, not against targeted attacks.
Stateful packet-filtering firewall
Stateful packet-filtering techniques use a more sophisticated approach, while retaining the basic abilities of packet-filtering firewalls. The key difference is that they also work at Layer 4, tracking each connection as a pair of endpoints defined by four parameters:
- The source address
- The source port
- The destination address
- The destination port
Stateful inspection techniques employ a dynamic memory that stores the state tables of the incoming and established connections. Any time an external host requests a connection to your internal host, the connection parameters are written to the state tables. As with packet-filtering firewalls, you can create rules to define whether certain packets can pass through. For example, a firewall rule can require dropping packets that contain port numbers higher than 1023, as most servers respond on standard ports numbered from zero to 1023.
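A minimal sketch of such a state table, with invented addresses: an outbound connection from an internal host is recorded, and an inbound packet is accepted only if it is the mirror image of a connection the inside host initiated.

```python
# Toy sketch of a stateful firewall's state table; all addresses are illustrative.
state_table = set()

def record_outbound(src_ip, src_port, dst_ip, dst_port):
    """Record an internal host opening a connection to an external host."""
    state_table.add((src_ip, src_port, dst_ip, dst_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    """An inbound packet must match an established connection in reverse."""
    return (dst_ip, dst_port, src_ip, src_port) in state_table

record_outbound("10.0.0.5", 51000, "93.184.216.34", 443)
print(inbound_allowed("93.184.216.34", 443, "10.0.0.5", 51000))  # True
print(inbound_allowed("203.0.113.7", 443, "10.0.0.5", 51000))    # False
```

A real implementation would also expire entries when connections close or time out; this sketch keeps only the lookup logic.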
Even though stateful packet-filtering firewalls do a good job, they are not as fast as simple packet-filtering firewalls. Incorporating a dynamic state table and other features makes the architecture more complex, which slows operation; users perceive this as a decrease in network performance. In addition, stateful packet-filtering firewalls cannot fully inspect higher-layer protocols and application services.
The difference between stateful packet-filtering firewalls and simple packet-filtering firewalls is that stateful packet filtering tracks the entire conversation, while packet filtering looks at only the current packet. Stateful inspections occur at all levels of the network and provide additional security, especially in connectionless protocols, such as User Datagram Protocol and Internet Control Message Protocol.
Proxy firewalls operate at the Application layer of the OSI model. A proxy can be deployed between a remote user (who might be on a public network such as the internet) and the dedicated server on the internet. All that the remote user sees is the proxy, so he doesn’t know the identity of the server he is actually communicating with. Similarly, the server sees only the proxy and doesn’t know the true user.
A proxy firewall can be an effective shielding and filtering mechanism between public networks and protected internal or private networks. Because applications are shielded by the proxy and actions take place at the application level, these firewalls are very effective for sensitive applications. Authentication schemes, such as passwords and biometrics, can be set up for accessing the proxies, which fortifies security implementations. This proxy system enables you to set a firewall to accept or reject packets based on addresses, port information and application information. For instance, you can set the firewall to filter out all incoming packets belonging to EXE files, which are often infected with viruses and worms. Proxy firewalls generally keep very detailed logs, including information on the data portions of packets.
The main disadvantage in using application proxy firewalls is speed. Because these firewall activities take place at the application level and involve a large amount of data processing, application proxies are constrained by speed and cost. Nevertheless, application proxies offer some of the best security of all the firewall technologies.
Web application firewall (WAF)
Web application firewalls are built to provide web applications security by applying a set of rules to an HTTP conversation. Because applications are online, they have to keep certain ports open to the internet. This means attackers can try specific website attacks against the application and the associated database, such as cross-site scripting (XSS) and SQL injection.
While proxy firewalls generally protect clients, WAFs protect servers. Another great feature of WAFs is that they detect distributed denial of service (DDoS) attacks in their early stages, absorb the volume of traffic and identify the source of the attack.
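A toy illustration of WAF-style inspection of an HTTP request: each rule is a pattern associated with a finding, and a request matching any rule is blocked. The patterns here are deliberately simplistic; production WAFs (such as those built on rule sets like ModSecurity's) use far more elaborate matching.

```python
# Toy WAF rule matching against a query string; patterns are illustrative only.
import re

RULES = [
    (re.compile(r"(?i)<script\b"), "possible XSS"),
    (re.compile(r"(?i)\bunion\s+select\b"), "possible SQL injection"),
    (re.compile(r"(?i)'\s*or\s+'?1'?\s*=\s*'?1"), "possible SQL injection"),
]

def inspect(query_string):
    """Return ('block', finding) on the first rule hit, else ('pass', None)."""
    for pattern, finding in RULES:
        if pattern.search(query_string):
            return ("block", finding)
    return ("pass", None)

print(inspect("user=alice&page=2"))                      # ('pass', None)
print(inspect("id=1 UNION SELECT password FROM users"))  # blocked
```

Naive pattern lists like this one are easy to evade with encoding tricks, which is one reason real WAFs normalize and decode requests before applying rules.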
Intrusion detection system (IDS)
An IDS enhances cybersecurity by spotting a hacker or malicious software on a network so you can remove it promptly to prevent a breach or other problems, and use the data logged about the event to better defend against similar intrusion incidents in the future. Investing in an IDS that enables you to respond to attacks quickly can be far less costly than rectifying the damage from an attack and dealing with the subsequent legal issues.
From time to time, attackers will manage to compromise other security measures, such as cryptography, firewalls and so on. It is crucial that information about these compromises immediately flow to administrators — which can be easily accomplished using an intrusion detection system.
Deploying an IDS can also help administrators proactively identify vulnerabilities or exploits that a potential attacker could take advantage of. Intrusion detection systems can be grouped into the following categories:
- Host-based IDS
- Network-based IDS
- Intrusion prevention system (IPS)
Host-based intrusion detection systems
Host-based IDSs are designed to monitor, detect and respond to activity and attacks on a given host. In most cases, attackers target specific systems on corporate networks that have confidential information. They will often try to install scanning programs and exploit other vulnerabilities that can record user activity on a particular host. Some host-based IDS tools provide policy management, statistical analytics and data forensics at the host level. Host-based IDSs are best used when an intruder tries to access particular files or other services that reside on the host computer. Because attackers mainly focus on operating system vulnerabilities to break into hosts, in most cases, the host-based IDS is integrated into the operating systems that the host is running.
Network-based intrusion detection systems
Network-based IDSs capture network traffic to detect intruders. Most often, these systems work as packet sniffers that read through incoming traffic and use specific metrics to assess whether a network has been compromised. Various internet and other proprietary protocols that handle messages between external and internal networks, such as TCP/IP, NetBEUI and XNS, are vulnerable to attack and require additional ways to detect malicious events. Intrusion detection systems frequently have difficulty with encrypted information and traffic from virtual private networks. Speeds over 1 Gbps are also a constraining factor, although modern (and costly) network-based IDSs can keep up with faster links.
Cooperative agents are one of the most important components of a distributed intrusion detection architecture. An agent is an autonomous or semi-autonomous piece of software that runs in the background and performs useful tasks for another. Relative to IDSs, an agent is generally a piece of software that senses intrusions locally and reports attack information to central analysis servers. The cooperative agents can form a network among themselves for data transmission and processing. The use of multiple agents across a network allows a broader view of the network than might be possible with a single IDS or centralized IDSs.
Intrusion prevention system (IPS)
An IPS is a network security tool that can not only detect intruders, but also prevent them from successfully launching any known attack. Intrusion prevention systems combine the abilities of firewalls and intrusion detection systems. However, implementing an IPS on an effective scale can be costly, so businesses should carefully assess their IT risks before making the investment. Moreover, some intrusion prevention systems are not as fast and robust as some firewalls and intrusion detection systems, so an IPS might not be an appropriate solution when speed is an absolute requirement.
One important distinction to make is the difference between intrusion prevention and active response. An active response device dynamically reconfigures or alters network or system access controls, session streams or individual packets based on triggers from packet inspection and other detection devices. Active response happens after the event has occurred; thus, a single packet attack will be successful on the first attempt but will be blocked in future attempts; for example, a DDoS attack will be successful on the first packets but will be blocked afterwards. While active response devices are beneficial, this one aspect makes them unsuitable as an overall solution. Network intrusion prevention devices, on the other hand, are typically inline devices on the network that inspect packets and make decisions before forwarding them on to the destination. This type of device has the ability to defend against single packet attacks on the first attempt by blocking or modifying the attack inline. Most important, an IPS must perform packet inspection and analysis at wire speed. Intrusion prevention systems should be performing detailed packet inspection to detect intrusions, including application-layer and zero-day attacks.
System or host intrusion prevention devices are also inline at the operating system level. They have the ability to intercept system calls, file access, memory access, processes and other system functions to prevent attacks. There are several intrusion prevention technologies, including the following:
- System memory and process protection — This type of intrusion prevention strategy resides at the system level. Memory protection consists of a mechanism to prevent a process from corrupting the memory of another process running on the same system. Process protection consists of a mechanism for monitoring process execution, with the ability to kill processes that are suspected of being attacks.
- Inline network devices — This type of intrusion prevention strategy places a network device directly in the path of network communications with the capability to modify and block attack packets as they traverse the device’s interfaces. It acts much like a router or firewall combined with the signature-matching capabilities of an IDS. The detection and response happens in real time before the packet is passed on to the destination network.
- Session sniping — This type of intrusion prevention strategy terminates a TCP session by sending a TCP RST packet to both ends of the connection. When an attempted attack is detected, the TCP RST is sent and the attempted exploit is flushed from the buffers and thus prevented. Note that the TCP RST packets must have the correct sequence and acknowledgement numbers to be effective.
- Gateway interaction devices — This type of intrusion prevention strategy allows a detection device to dynamically interact with network gateway devices such as routers or firewalls. When an attempted attack is detected, the detection device can direct the router or firewall to block the attack.
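The session-sniping technique above hinges on crafting a TCP RST whose fields the peer will accept. As a sketch, the 20-byte TCP header of such a reset can be built as follows (actually sending it would require a raw socket and elevated privileges; here we only construct the header, and the ports and sequence number are invented):

```python
# Illustrative construction of a TCP RST header; ports and seq are made up.
import struct

def tcp_rst_header(src_port, dst_port, seq):
    data_offset = 5 << 12          # header length: 5 32-bit words, no options
    flags = data_offset | 0x04     # set the RST flag
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port,
                       seq,        # must fall in the window the peer expects
                       0,          # acknowledgment number (unused here)
                       flags,
                       0,          # window size
                       0,          # checksum (computed by a real sender)
                       0)          # urgent pointer

hdr = tcp_rst_header(443, 51000, seq=123456789)
print(len(hdr))  # 20-byte TCP header
```

The comment on the sequence number is the crux of the note above: a receiver silently drops a reset whose sequence number is outside the current window, so the sniping device must track the session state to produce valid numbers.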
There are several risks when deploying intrusion prevention technologies. Most notable is the recurring issue of false positives in today’s intrusion detection systems. On some occasions, legitimate traffic will display characteristics similar to malicious traffic. This could be anything from inadvertently matching signatures to uncharacteristically high traffic volume. Even a finely tuned IDS can present false positives when this occurs. When intrusion prevention is involved, false positives can create a denial-of-service (DoS) condition for legitimate traffic. In addition, attackers who discover or suspect the use of intrusion prevention methods can purposely create a DoS attack against legitimate networks and sources by sending attacks with spoofed source IP addresses. A simple mitigation to some DoS conditions is to use a whitelisting policy.
Session sniping system identification is another concern when deploying active response IPSs. When systems terminate sessions with RST packets, an attacker might be able to discover not only that an IPS is involved but also the type of underlying system. Readily available passive operating system identification tools analyze packets to determine the underlying operating system. This type of information might enable an attacker to evade the IPS or direct an attack at the IPS.
Another risk with active response IPSs involves gateway interaction timing and race conditions. In this scenario, a detection device directs a router or firewall to block the attempted attack. However, because of network latency, the attack has already passed the gateway device before it receives this direction from the detection device. A similar situation could occur with a scenario that creates a race condition on the gateway device itself between the attack and the response. In either case, the attack has a high chance of succeeding.
When deploying an IPS, you should carefully monitor and tune your systems and be aware of the risks involved. You should also have an in-depth understanding of your network, its traffic, and both its normal and abnormal characteristics. It is always recommended to run IPS and active response technologies in test mode for a while to thoroughly understand their behavior.
Wireless intrusion prevention and detection system (WIDPS)
A wireless intrusion prevention system (WIPS) is a standalone security device or integrated software application that monitors a wireless LAN’s radio spectrum for rogue access points and other wireless security threats.
A WIDPS compares the MAC addresses of all connected wireless access points on a network against a list of authorized ones and alerts IT staff when a mismatch is found. To defeat MAC address spoofing, some higher-end WIDPSes, such as Cisco’s, can analyze the unique radio frequency signatures that wireless devices generate and block unknown radio fingerprints. When a rogue wireless access point is found, its signal can be suppressed by your authorized access points. In addition to providing a layer of security for wireless LANs, WIDPSes are also useful for monitoring network performance and discovering misconfigured access points. A WIDPS operates at the Data Link layer of the OSI model.
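The MAC-comparison check described above reduces to a set difference. A minimal sketch (the MAC addresses are invented for the example):

```python
# Sketch of the WIDPS authorized-AP check; MAC addresses are illustrative.
AUTHORIZED_APS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def find_rogue_aps(observed_macs):
    """Return observed access point MACs that are not on the authorized list."""
    return sorted(set(observed_macs) - AUTHORIZED_APS)

scan = ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01", "00:1a:2b:3c:4d:5f"]
for mac in find_rogue_aps(scan):
    print(f"ALERT: rogue access point {mac}")
```

As the text notes, this check alone is defeated by MAC spoofing, which is why higher-end systems supplement it with radio fingerprinting.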
There are three basic ways to deploy a WIDPS:
- The wireless access point does double duty, serving wireless clients while periodically scanning for rogue access points.
- A sensor that is built into the authorized access point continually scans radio frequencies, looking for unauthorized access points.
- Sensors are deployed throughout a building to monitor radio frequencies. The sensors forward the data they collect to a centralized server for further analysis, action and archiving. This approach is more expensive because it requires dedicated hardware, but it is also thought to be most effective.
Most WIDPS have these fundamental components:
- Sensors — Monitor the radio spectrum and forward logs back to a central management server.
- Management server — Receives information captured by the sensors and takes appropriate defense actions based on this information.
- Database server — Stores and organizes the information captured by the sensors.
- Console — Provides an interface for administrators to set up and manage the WIDPS.
Unified threat management (UTM)
Unified threat management (UTM) is an approach to information security in which a single hardware or software installation provides multiple security functions (intrusion prevention, antivirus, content filtering and so forth). This contrasts with the traditional method of having point solutions for each security function. UTM simplifies information-security management because the security administrator has a single management and reporting point rather than having to juggle multiple products from different vendors. UTM appliances have quickly gained popularity, partly because the all-in-one approach simplifies installation, configuration and maintenance. Such a setup saves time, money and people when compared to the management of multiple security systems. Here are the features that a UTM can provide:
- Network firewall
- Intrusion detection
- Intrusion prevention
- Gateway anti-virus
- Proxy firewall
- Deep packet inspection
- Web proxy and content filtering
- Data loss prevention (DLP)
- Security information and event management (SIEM)
- Virtual private network (VPN)
- Network tarpit
The disadvantages of combining everything into one include a potential single point of failure and dependence on one vendor. Vendor diversity is considered to be a network security best practice, so you should assess your risks before deploying such an appliance.
Network access control (NAC)
NAC is a network security control device that restricts the availability of network resources to endpoint devices that comply with your security policy. Some NAC solutions can automatically fix non-compliant devices to ensure they are secure before allowing them to access the network. Network access control does a lot to enhance the endpoint security of a network. Before giving access to the network, NAC checks the device’s security settings to ensure that they meet the predefined security policy; for example, it might check whether the host has the latest antivirus software and the latest patches. If the conditions are met, the device is allowed to enter the network. If not, NAC will quarantine the endpoint or connect it to the guest network until the proper security enhancements are made to comply with policy. NAC can use agents to assess the device’s security or it can be agentless.
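The admission decision described above can be sketched as a policy check: a device is admitted only if every condition holds, and otherwise it is quarantined along with the list of failed checks. The policy fields here are invented examples, not any NAC product's schema.

```python
# Hypothetical NAC admission check; policy fields are illustrative.
POLICY = {
    "antivirus_up_to_date": True,
    "os_patched": True,
    "firewall_enabled": True,
}

def admit(device):
    """Return ('allow', []) if compliant, else ('quarantine', failed_checks)."""
    failures = [key for key, required in POLICY.items()
                if device.get(key) != required]
    return ("allow", []) if not failures else ("quarantine", failures)

laptop = {"antivirus_up_to_date": True, "os_patched": False,
          "firewall_enabled": True}
print(admit(laptop))  # ('quarantine', ['os_patched'])
```

Returning the failed checks mirrors what a real NAC does with a quarantined endpoint: it tells the remediation system exactly what must be fixed before the device is re-evaluated.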
Proxy server
Proxy servers act as negotiators for requests from client software seeking resources from other servers. A client connects to the proxy server and requests some service (for example, a website); the proxy server evaluates the request and then allows or denies it. Most proxy servers act as forward proxies and are used to retrieve data on behalf of the clients they serve.
If a proxy server is accessible by any user on the internet, then it is said to be an “open” proxy server. A variation is the reverse proxy, also known as a “surrogate.” This is an internal-facing server used as a front-end to control (and protect) access to a server on a private network. The reverse scenario is used for tasks like load-balancing, authentication, decryption and caching — responses from the proxy server are returned as if they came directly from the original server, so the client has no knowledge of the original servers. Web application firewalls (described earlier) can be classified as reverse proxy servers.
Proxies can be transparent or nontransparent. A transparent proxy does not modify the request or response beyond what is required for proxy authentication and identification; in other words, clients need not be aware of the existence of the proxy. A nontransparent proxy modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction or anonymity filtering.
In organizations, proxy servers are usually used for traffic filtering (web filters) and performance improvement (load balancers).
Web filters prevent users’ browsers from loading certain pages of particular websites. URL filtering blocks websites (or sections of websites) based solely on the URL, restricting access to specified websites and certain web-based applications. This is in contrast to content filtering systems, which block data based on its content rather than its source. Microsoft, for example, implemented a phishing filter that acted as a URL filter for its browser, and later replaced it with the SmartScreen filter, which runs in the background and sends the address of each website visited to the SmartScreen server, where it is compared against a maintained list of phishing and malware sites. If a match is found, a blocking web page appears and encourages you not to continue.
Web filter appliances include additional technologies to block malicious websites. They ship with a database of malware sites, and you can also create your own list or policy of blocked websites. You can apply site whitelisting or blacklisting, see every user’s full website history, inspect cached pages, and even measure the amount of downloaded traffic. Analyzing this information helps you understand how your users work on the internet and what their interests are, which can be a great advantage in insider threat prevention.
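Both filtering policies mentioned above come down to a hostname lookup. A minimal sketch with invented hostnames: a blacklist blocks known-bad hosts and passes everything else, while a whitelist denies by default.

```python
# Toy URL filter showing blacklist vs. whitelist policies; hosts are illustrative.
from urllib.parse import urlparse

BLACKLIST = {"malware.example.com", "phish.example.net"}
WHITELIST = {"intranet.example.org", "www.example.org"}

def url_allowed(url, policy="blacklist"):
    host = urlparse(url).hostname
    if policy == "blacklist":
        return host not in BLACKLIST   # allow unless explicitly blocked
    return host in WHITELIST           # whitelist: deny by default

print(url_allowed("http://malware.example.com/payload"))            # False
print(url_allowed("https://www.example.org/", policy="whitelist"))  # True
```

Real web filters match far more than the hostname (paths, categories, reputation scores), but the allow-by-default versus deny-by-default distinction is the same one described for firewall policies earlier.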
Network load balancer (NLB)
Load balancers are devices that direct client requests to individual servers in a network based on factors such as server processor utilization, number of connections to a server or overall server performance. Organizations use load balancers to minimize the chance that any particular server will be overwhelmed and to optimize the bandwidth available to each computer in the network.
A load balancer can be implemented as a security software or hardware solution, and it is usually associated with a device — a router, a firewall, a network address translation (NAT) appliance and so on. A load balancer splits the traffic intended for a website into individual requests that are then rotated to redundant servers as they become available. A key issue with load balancers is scheduling — determining how to split up the work and distribute it across servers.
There are several load balancing methods:
- Round-robin — The first client request is sent to the first group of servers, the second is sent to the second, and so on. When it reaches the last group of servers in the list, the load balancer starts over with the first group of servers.
- Affinity — Affinity minimizes response time to clients by using different methods for distributing client requests. It has three types:
- No affinity — NLB does not associate clients with a particular group of servers; every client request can be load balanced to any group of servers.
- Single affinity — NLB associates clients with particular groups of servers by using the client’s IP address. Thus, requests coming from the same client IP address always reach the same group of servers.
- Class C affinity — NLB associates clients with particular groups of servers by using the Class C portion of the client’s IP address. Thus, clients coming from the same Class C address range always access the same group of servers.
- Least connection — This method takes the current server load into consideration. The current request goes to the server that is servicing the least number of active sessions at the current time.
- Agent-based adaptive load balancing — Each server in the pool has an agent that reports on its current load to the load balancer. This real time information is used when deciding which server is best placed to handle a request.
- Chained failover — The order of servers is configured (predefined) in a chain.
- Weighted response time — Response information from a server health check is used to determine which server is responding the fastest at a particular time.
- Software-defined networking — This approach combines information about upper and lower networking layers. This allows information about the status of the servers, the status of the applications running on them, the health of the network infrastructure, and the level of congestion on the network to all play a part in the load balancing decision making.
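Three of the scheduling methods above can be sketched in a few lines each. Server names and session counts here are invented for illustration.

```python
# Sketches of round-robin, least-connection and single-affinity scheduling.
import itertools

SERVERS = ["srv-a", "srv-b", "srv-c"]

# Round-robin: hand requests to servers in strict rotation.
rr = itertools.cycle(SERVERS)
print([next(rr) for _ in range(5)])  # ['srv-a', 'srv-b', 'srv-c', 'srv-a', 'srv-b']

# Least connection: pick the server with the fewest active sessions right now.
active_sessions = {"srv-a": 12, "srv-b": 3, "srv-c": 7}
def least_connection():
    return min(active_sessions, key=active_sessions.get)
print(least_connection())  # srv-b

# Single affinity: hash the client IP so one client always reaches one server.
def single_affinity(client_ip):
    return SERVERS[hash(client_ip) % len(SERVERS)]
print(single_affinity("198.51.100.7") == single_affinity("198.51.100.7"))  # True
```

The affinity sketch shows the trade-off mentioned above: hashing on the client IP preserves session state on one server, but it distributes load less evenly than round-robin or least-connection scheduling.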
Network load balancers can have an active-active or active-passive configuration. An active-active configuration means that multiple load balancing servers are working at all times to handle the requests as they come in. An active-passive configuration has one primary server and others are in listening mode, ready to be activated and start splitting the load if the first server becomes overwhelmed.
Mail gateway
A mail gateway can be used not only to route mail but also to perform other functions, such as encryption or, to a more limited extent, DLP. More commonly, spam filters detect unwanted email and prevent it from reaching a user’s mailbox. Spam filters judge emails based on policies or patterns designed by an organization or vendor. More sophisticated filters use a heuristic approach that attempts to identify spam through suspicious word patterns or word frequency. The filtering is done based on established rules, such as blocking email coming from certain IP addresses, email that contains particular words in the subject line, and the like. Although spam filters are usually used to scan incoming messages, they can also be used to scan outgoing messages to help identify internal PCs that might have contracted a virus.
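A toy version of the heuristic scoring just described: suspicious words and a blocked sender each add to a score, and mail over a threshold is flagged. The word list, weights, addresses and threshold are all invented for the example.

```python
# Toy heuristic spam scorer; words, weights, IPs and threshold are illustrative.
SUSPICIOUS_WORDS = {"winner": 2, "free": 1, "urgent": 1, "lottery": 3}
BLOCKED_SENDERS = {"198.51.100.99"}

def spam_score(subject, body, sender_ip):
    score = 5 if sender_ip in BLOCKED_SENDERS else 0
    words = f"{subject} {body}".lower().split()
    return score + sum(SUSPICIOUS_WORDS.get(w, 0) for w in words)

def is_spam(subject, body, sender_ip, threshold=3):
    return spam_score(subject, body, sender_ip) >= threshold

print(is_spam("URGENT winner", "You won the lottery", "203.0.113.5"))  # True
print(is_spam("Meeting notes", "Agenda attached", "203.0.113.5"))      # False
```

As with the heuristic antivirus threshold discussed below, tuning the threshold is the hard part: too low and legitimate mail is quarantined, too high and spam slips through.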
Antivirus software
Antivirus software is one of the most widely adopted security tools among both individuals and organizations. Antivirus solutions recognize malicious software in several ways:
- Based on the existing malware signatures — Signatures are the most popular way to detect malicious code. These signatures are basically the malware’s fingerprints; they are collected into huge databases for use by antivirus scanners. That’s why it is critical that the antivirus application stays up to date — so that the latest signatures are present. Signature-based detection works by looking for a specific set of code or data. Antivirus solutions compare every file, registry key and running program against that list and quarantine anything that matches.
- Using heuristics — A slightly more advanced technique is heuristics. Instead of relying on malware that has already been seen in the wild, as signatures do, heuristics tries to identify previously unseen malware. A heuristic scan examines a file for features frequently seen in malware, such as attempts to access the boot sector, write to an EXE file or delete hard-drive contents. Administrators must set a threshold that triggers malware detection, and it must be tuned just right for heuristic scanning to be effective. Heuristic signatures monitor for certain types of “bad” behavior. Every virus has its own specific characteristics, and known characteristics are used to build up defenses against future viruses. Although new viruses are created and distributed almost every day, the most common viruses in circulation are copies of the same old ones, so it makes sense to use the historical record of viruses and their characteristics to create defenses against future attacks.
- Based on file length — Another detection method uses file length. Because viruses work by attaching themselves to host files, the length of an infected file usually increases. Antivirus software compares the current length of a file with its original length whenever the file is used; if the two lengths differ, that signals the possible presence of a virus.
- Based on checksums — A checksum is a value calculated in a file to determine if data has been altered by a virus without increasing file length. Checksums should be used only when it is clear that the file was virus-free the first time a checksum was computed; otherwise, the baseline checksum will be invalid. Virus symptoms usually depend on the type of virus. Remember that symptoms are not unique to any one virus; several viruses can have similar symptoms. Some of the most common symptoms are the following:
- Frequent or unexpected computer reboots
- Sudden size increases in data and software
- File extension change (common with ransomware)
- Disappearance of data files
- Difficulty saving open files
- Shortage of memory
- Presence of strange sounds or text
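The checksum method described above can be sketched with a cryptographic hash (SHA-256 here; the file contents are stand-ins, and the approach assumes the baseline was computed while the file was known to be clean):

```python
# Checksum-based change detection; the byte strings stand in for file contents.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline, computed while the file is known to be virus-free.
baseline = checksum(b"original clean program bytes")

# Later scan: recompute and compare against the baseline.
current = checksum(b"original clean program bytes")
print(current == baseline)   # True: file unchanged

infected = checksum(b"original clean program bytes + injected code")
print(infected == baseline)  # False: file was altered
```

Unlike the file-length check, a hash catches same-length modifications too, which is why the text stresses that the baseline must be taken on a clean file: a checksum of an already-infected file would simply ratify the infection.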
Antivirus can be a part of endpoint protection systems that provide not only virus protection but DLP, AppLocker, content filtering and other capabilities as well.
There are several ways an attacker can evade antivirus products. If the attacker’s software has never been seen by the antivirus vendors, there will be no signature for it and signature-based detection will not catch it, though heuristics still might. Attackers can also use stealth techniques to avoid being scanned by the antivirus program at all.
We’ve described almost all devices that will increase security in your network. Some of them, such as firewalls and antivirus software, are must-have network security devices; others are nice to have. Before implementing any new security device, always perform an IT security risk assessment; it will help you determine whether the investment is worth it.
Jeff is a Director of Global Solutions Engineering at Netwrix. He is a long-time Netwrix blogger, speaker, and presenter. In the Netwrix blog, Jeff shares lifehacks, tips and tricks that can dramatically improve your system administration experience.