Friday 24 April 2015

Discussion: Researching SDN and zero-day network attacks

The area surrounding SDN and zero-day network attacks is almost limitless in terms of its scope for future development and improvement. Research to date only scratches the surface of this technology and its possibilities for the world's future networks. It is my belief that creating a technology that could solve such issues would also have significant market value. Bringing this concept to the next stage of development would require a team of both researchers and software developers, along with full access to an adequate functional test bed.

To justify this, it must be considered that in order to develop such software the problem must be fully understood and extensively tested. The threat surface of our network infrastructures is constantly changing, and to defend against an unknown quantity such as a zero-day network attack, many different approaches and attack methods need to be examined. It is my belief that SDN has all of the qualities needed to maintain and protect the networks of the future. It is also my belief, having researched the technology, that SDN poses many new security obstacles that need to be further addressed.

The centralized controller architecture is, in my view, one that may need to be re-examined, as a single point of failure in a corporate network is not an acceptable risk. This, however, should not deter anyone from SDN; it is simply an area that requires more attention in terms of securing the environment and creating redundancy should the controller go down. During the course of studying this technology it also became apparent that zero-day attacks cannot be confined to a specific attack method, and therefore this area alone requires extensive research. It is proposed that future projects be split into three defined research modules.
  
1) The first module would be to extensively research past and present zero-day network attacks and endeavor to discover small similarities between them. There is already an active community carrying out this research in the form of HP's Zero Day Initiative, which was established in 2005. This community actively reports, records and researches zero-day attacks. In studying the many different types of zero-day network attacks, similarities in code may not be found, but similarities in construction may be uncovered. It would then be possible to match these similarities against normal network activity data, and it may be possible to identify early warning methods for such attacks.

This type of approach would still not be enough to ensure that zero-day exploits are discovered in real time; in fact, potential attackers would certainly change their exploits to avoid this detection. This research would, however, lead researchers to learn more about how potential hackers operate and therefore give an insight into how they construct and target attacks. Each piece of this information may be useless by itself, but combined it may paint a picture of attack locations, unique signatures, hardware weaknesses and possibly many more unforeseen traits. This type of data gathering may unearth a larger scope and highlight unseen trends.

This type of data gathering, however, will only be of use if carried out on a massive scale, which would require collaboration among all parties involved in research across this area. This type of open collaboration can only lead to more positive outcomes and would also help kindle the open source collaboration that the SDN platform is currently being built on. It is my belief that the only way to fight an unknown threat such as a zero-day network attack is to take such an approach; doing so in a shared forum opens the door to many different outlooks and opinions on the best ways and methods to combat it.


2) The second module of research would be to extensively test not the capabilities of SDN but its weaknesses. It is not possible to have a technology that will defend and protect against the threats of the future if its own weaknesses are not first exposed and reinforced. In my opinion the capabilities of SDN far exceed the scope of current networking; however, new threat surfaces are also presented, and these need to be examined and challenged thoroughly before SDN can be extensively rolled out. By first securing the weaknesses of the technology, its strengths can then be assessed. It is my opinion that open source communities such as the OpenDaylight project will have a far greater chance of conquering these vulnerabilities through mass collaboration and innovation.

The unique way that SDN is designed allows for much more fluid networking platforms. Corporations and governments will be able to tailor their networks to meet the demands of their environments, both in quality of service and in security. Custom applications can be written to meet specific demands for data centers, and for cities such as Bristol as it endeavors to make history as the world's first SDN city. In my opinion the Bristol is Open project will be the most interesting live research bed, especially in regards to security. This project alone may very well test the weaknesses and capabilities of SDN in ways that have not previously been considered, and it should be closely monitored and heavily documented. It is projects like this that will give researchers an opportunity to test the capabilities of SDN to defend against not only zero-day network attacks but all network-related intrusions. In my opinion security professionals and researchers should be allowed full access to this project in a collaborative effort to create the most efficient and powerful networking tools and architectures for the future. Exposing these weaknesses now will only allow for a stronger implementation of the architecture as it becomes more mainstream.

3) The third and final module for research should be a combination of modules 1 and 2. In order to use software-defined networking to automate the defense against zero-day network attacks, the two areas must first be thoroughly tested and examined. It is my belief that SDN will provide the answers that are needed in this area, but the path that leads to this solution must first be thoroughly examined. As previously stated, before SDN can protect the networks of the future it must first gain the ability to protect itself. This ability will only be gained by continuous research and testing in the area. It is my hope that this blog and other articles and papers on the area have opened the door for a more heated and widespread discussion around zero-day network attacks and SDN. It is also my hope that any future work in the area is carried out in an open and collaborative fashion, allowing many ideas and concepts to be exchanged in order to find fitting solutions.

It is very important to remember that with SDN the world is not limited to a one-network-fits-all implementation, as every architecture can be custom tailored to the needs of that network. This flexibility alone will go a long way towards mitigating attacks that once succeeded by exploiting fixed network infrastructures and hardware. Diversifying these future networks adds a layer of complication that is currently not present. Researchers need to focus on these changing elements to creatively implement innovative solutions that can be fitted into future networks' defense mechanisms. It is my belief that the capabilities of SDN may far exceed what was originally thought of this new architecture, and only future developments will show exactly what SDN has to offer.


After identifying zero-day network attacks as a potential problem that could theoretically be eradicated by the arrival of software-defined networking, it is hoped that this blog and other discussions and papers on the area have highlighted the topic. It is my hope that, if nothing else, this type of discussion and research will open a debate surrounding SDN and zero-day network attacks. It is also my hope that it will highlight the need for more discussion about the vulnerabilities that exist in current SDN architectures. It is vitally important that these weaknesses first be addressed and amended before SDN can be considered a mainstream alternative to current network infrastructures. A project like Bristol is Open marks a vital milestone in the growth of this area and will be of keen interest as it evolves and takes its place in the history books of networking. Only as projects like this one emerge and are tested by the threats of the outside world will the true capabilities and weaknesses of SDN be understood.


It has been identified that this technology has only evolved so quickly because of the open source communities that have nurtured and contributed to its development. It is my opinion that this collaboration will be the best way to create sustainable security solutions into the future; as the journalist Mark Shields once said, “There is always strength in numbers. The more individuals or organizations that you can rally to your cause, the better”. This is the type of mentality that needs to be adopted when approaching SDN, as the only way to maintain reliability and security is to constantly challenge the capabilities of the technology. It is my belief that there is no better way of doing this than leaving this technology in the hands of open source communities such as the OpenDaylight Project or Project Floodlight. These open communities will allow for the innovation and creative thinking that might otherwise be curtailed in a more profit-driven environment.

If you would like to add to this discussion, feel free to share your opinions below.

Monday 13 April 2015

Open source SDN controllers

Below is a list of open source controllers that are widely available, from the very first platforms to the current market leaders. On this blog you will find tutorials on how to set up mininet with the OpenDaylight controller if you wish to experiment with that particular controller technology.

·         NOX
NOX was the first popular OpenFlow controller available for download. It was one of the initial controllers to lead the move towards SDN, but like most new technologies it was not widely implemented. There were a number of issues with NOX. As an early stage technology, these issues mainly centered on the fact that it was mostly programmed in C++ and lacked proper documentation of its inner workings.

·         POX
POX was established as a successor to the NOX controller and managed to gain more traction, being implemented by a number of SDN developers and engineers. This was mainly due to the fact that POX offered a friendlier API with better documentation. POX, which was written in Python, also had the advantage of a web-based graphical user interface (GUI).

·         Beacon
Beacon offered the first really promising open source SDN controller written in Java, and it was highly integrated with the Eclipse IDE. This gave beginner programmers a chance to work with and create SDN environments. It was limited, however, to the creation of star topologies (no loops). Despite this, Beacon opened the door for more advanced controllers to follow.

·         Floodlight
It was not long after Beacon that Big Switch's adaptation of the software came along in the form of the Floodlight controller. Floodlight was built using Apache Ant, a very popular software build tool that allowed for very easy and flexible development of Floodlight. Floodlight gained a lot of popularity and has a wide community, allowing many different features to be created that can be added and tailored to specific environments. Floodlight also provides both a web-based and a Java-based GUI, and most of its functionality is exposed through a REST API.

·         OpenDaylight
OpenDaylight is considered the most popular and interesting controller available at the moment. It is a Linux Foundation collaborative project that has been strongly supported by major industry players such as Cisco and Big Switch. Similarly to Floodlight, OpenDaylight is written in Java and exposes a REST API along with a web-based GUI. OpenDaylight is actively being updated, and the latest release, Helium SR3, can handle network virtualization and network function virtualization and has the capability to scale to very large networks. This scalability is evident in its being chosen as the technology to create the world's first software-defined city in Bristol, England, as part of the Bristol is Open project.

Of all of the above, it is my belief that the OpenDaylight controller is the strongest contender to become a breakthrough market leader in this field. The ODL community is very active, and you can follow the announcement of exciting new developments, such as projects like Bristol is Open, on their website.

 


 

Thursday 9 April 2015

The Fundamentals Of SDN

There are five fundamental traits involved when we look at SDN: plane separation, a simplified device, a centralized controller, network automation and virtualization. These five traits are the foundations that SDN is built on. It is important to fully understand the concept behind each of them, as this will allow us to fully understand the technology itself. Let us look at the concept of plane separation first, as this is one of the driving factors within SDN; it refers to the separation of the forwarding and control planes. The role of the forwarding plane is to forward, drop, consume or replicate an incoming packet. This is done by referencing the address table in the hardware and sending the packet out the correct port. In cases where the packet does not meet certain criteria, as specified by Quality of Service (QoS) filtering, or where a buffer overflow condition occurs, the packet is dropped.

This rule changes in the event that the hardware receives a multicast packet; in this instance the packet must be replicated and then forwarded out of several different ports. The protocols, logic and algorithms for making these decisions, which are required to program the forwarding plane, are stored in the control plane. The majority of these protocols require a global knowledge of the network on which they operate. The control plane is responsible for determining how the forwarding table and logic in the data plane are to be programmed or configured.
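To make the forwarding-plane behavior described above more concrete, the sketch below models a simplified flow-table lookup in Python. It is purely illustrative (the table structure and field names are my own, not those of any real switch or controller), but it shows the basic forward/replicate/drop decision.

# Illustrative only: a toy model of a forwarding-plane lookup.
# Field names and structures are invented for clarity, not taken from any real device.

MULTICAST_PREFIX = "224."          # IPv4 multicast range starts at 224.0.0.0

forwarding_table = {
    "10.0.0.1": 1,                 # destination IP -> output port
    "10.0.0.2": 2,
}

def handle_packet(dst_ip, qos_ok=True):
    """Return the action the data plane would take for an incoming packet."""
    if not qos_ok:
        return ("drop", None)                       # fails QoS filtering
    if dst_ip.startswith(MULTICAST_PREFIX):
        return ("replicate", list(forwarding_table.values()))  # copy out several ports
    if dst_ip in forwarding_table:
        return ("forward", forwarding_table[dst_ip])
    return ("send_to_control_plane", None)          # no entry: let the control plane decide

print(handle_packet("10.0.0.2"))        # ('forward', 2)
print(handle_packet("224.0.0.5"))       # ('replicate', [1, 2])
print(handle_packet("10.0.0.9"))        # ('send_to_control_plane', None)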

In traditional networks every device has its own control plane, which looks after the primary functions of running routing and switching protocols so that all of the distributed forwarding tables on the devices within a network stay synchronized. The reason for this is to avoid the creation of loops in the network. If we look at the SDN model we can see that the control plane is moved away from the switching device and relocated to a centralized controller. By doing this we simplify the devices, allowing them to be run by a centralized management system, i.e. the controller, where all of the management and control software is situated. This allows the controller to use high-level policies to govern the network; the controller sends primitive instructions to the now simplified devices, allowing them to make fast decisions on incoming packets where it is appropriate to do so.

If we consider the centralized software-based controller in SDN in terms of network automation and virtualization, we can see that SDN provides an open interface on the controller to allow for automated control of the network. The terms northbound and southbound are often used to describe this automation, distinguishing whether an interface is being used to connect applications or devices to the controller. The southbound API is used by the controller to program network devices, while the northbound API is used by the controller to allow software plug-ins and applications that provide the services necessary for the network to run efficiently. This allows the network to react quickly and dynamically to changes, calling on different applications depending on what is required, such as reacting to a network attack in real time to prevent services being disrupted. One of the key elements of the northbound API is that it allows the software above it to operate without any knowledge of the individual traits of the network devices themselves. This is key to allowing applications to be developed that can work over hardware from multiple different vendors, even if the devices differ in their implementation details. This is all aided by the open approach taken with SDN to ensure that applications and protocols are not vendor specific and can run on multiple devices across a network infrastructure.
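As a rough illustration of what "northbound" automation can look like in practice, the sketch below uses Python's requests library to push a policy to a controller's REST interface. The URL path, credentials and payload here are hypothetical placeholders of my own; consult your controller's API documentation (for example OpenDaylight's REST documentation) for the real endpoints and formats.

# Hypothetical sketch of a northbound REST call; the endpoint and payload are placeholders.
import requests

CONTROLLER = "http://192.168.195.135:8181"   # controller address used elsewhere on this blog
policy = {
    "name": "block-suspicious-host",
    "match": {"src-ip": "10.0.0.99"},
    "action": "drop",
}

# An application asks the controller to enforce a policy; the controller then translates it
# into southbound instructions (for example OpenFlow flow entries) on each switch.
resp = requests.post(CONTROLLER + "/example/northbound/policies",   # placeholder path
                     json=policy, auth=("admin", "admin"), timeout=10)
print(resp.status_code)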

Now that we have analyzed the makeup of SDN, we can break its operation into three blocks: the centralized controller, the SDN devices and the applications. The SDN devices are responsible for forwarding functionality and for deciding what to do with incoming packets. The devices contain data that indicates the actions to be taken in making these forwarding decisions. The controller has predefined these decisions by associating the data with flows and passing this information on to the devices. This simplified nature allows for quick detection of unusual data patterns: if a device does not recognize a flow it sends the packets to the controller for closer inspection. This allows the controller to take a granular look at the packets and determine in real time whether they are malicious before directing the device on what action to take, or implementing a defense strategy that is programmed to deal with the attack at hand.



Wednesday 8 April 2015

The Evolution of Networking: a Brief History

In the early days of computing the idea of computer networking did not exist. Computers, or mainframes as they were known, were large structures that took up entire floors of buildings. To transfer any data from these mammoth machines you needed to use physical media such as magnetic tapes. As mainframes started to evolve they needed new ways to move data; these new mediums presented themselves in the form of remote terminal controllers or card readers that operated as subservient devices, known as peripherals, directly controlled by the mainframe. The first network connections that started to emerge at this point in time were very simple point-to-point or point-to-multipoint links. This limited the communications on a network to a small chain of physically connected devices where the mainframe controlled what communications were sent.

Over time these mainframe systems got smaller and more like the computer systems we are all familiar with, and as the technologies evolved a new way to connect all of these separate systems and share communications without a mainframe was needed. This need for a new method of communication brought about the emergence of the local area network (LAN), and along with it arrived new technologies such as IEEE 802.3 and IEEE 802.5.
The LAN was a shared-media network and did not scale well, so the solution devised to solve this issue was the bridged network. The idea of a bridged network was to split the shared-media network into separate segments to allow for better aggregation of bandwidth, as not all of the devices would now be transmitting at the same time. The bridged network concept was later replaced by switches, which allowed for many more improvements such as VLAN implementation and the Spanning Tree Protocol that eliminated loops in a network, just to mention a few.
  

The final layer of communication that was added to these networks was routing. Many different routing protocols were developed to allow networks to route traffic outside of a LAN and across the Internet. As switches and routers developed, so did the programmability of this hardware to deliver more secure and faster communications. Fig 01.1 below illustrates how software starts to play a larger role as the hardware becomes more efficient and evolves over the years.


Fig 01.1


Before the emergence of OpenFlow, the protocol at the heart of SDN, researchers were examining new ways to evolve the networks of the future. The earliest recorded work on programmable networks did not involve Internet routers or switches but instead centered on ATM switches. Fig 01.2 below lists the earliest technologies that eventually led to the birth of the OpenFlow protocol and the emergence of SDN.


Open signaling: Separating the forwarding and control planes in ATM switching (1999)
Active networking: Separating control and programmable switches (late 1990s)
DCAN: Separating the forwarding and control planes in ATM switching (1997)
IP switching: Controlling layer two switches as a layer three routing fabric (late 1990s)
MPLS: Separating control software, establishing semi-static forwarding paths for flows in traditional routers (late 1990s)
RADIUS, COPS: Using admission control to dynamically provision policy (2010)
Orchestration: Using SNMP and CLI to help automate configuration of networking equipment (2008)
Virtualization Manager: Using plug-ins to perform network reconfiguration to support server virtualization (2011)
ForCES: Separating the forwarding and control planes (2003)
4D: Locating control plane intelligence in a centralized system (2005)
Ethane: Achieving complete enterprise and network access and control using separate forwarding and control planes and utilizing a centralized controller (2007)

Fig 01.2




The two technologies to take note of from Fig 01.2 are Devolved Control of ATM Networks (DCAN) and Open Signaling. As you can see from the descriptions above, DCAN and Open Signaling both separated the forwarding and control planes in ATM switches and gave the control to an external device, very similar to the controller function in SDN networks. This technology never fully gained the trust of IT administrators and as a result never became mainstream. The rest of the technologies in Fig 01.2 all played a part in the steps required to get to where we are today with SDN. It wasn't, however, until the arrival of OpenFlow that SDN was actually born. The year was 2008, and researchers along with vendors had started to experiment with the idea of OpenFlow. OpenFlow was designed to allow researchers to experiment and innovate with protocols in everyday networks. This concept was to become a defining change in how the industry approached networking. It wasn't until 2011 that SDN actually started to make an impact on the networking industry, as many big-name vendors such as Cisco started to implement the OpenFlow specification in their products. The OpenFlow specification defines the protocol to be used between the SDN controller and the switch, and it also specifies the behavior that is expected from the switch.

If we look at this specification in more detail, the basic operation of an OpenFlow solution can be broken down into the following points (a minimal sketch of this loop follows the list):
·         The controller populates the flow table entries on the switches.
·         The switch examines incoming packets; when it identifies a matching flow it carries out the action associated with that flow.
·         If the switch cannot find a matching flow it forwards the packet to the controller and waits for further instructions on how to deal with the packet.
·         The controller will update the switch with new flow entries as new patterns are identified, allowing the switch to deal with these packets locally.
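The sketch below is a minimal, purely illustrative Python model of that loop. It does not use a real OpenFlow library; the switch and controller here are plain functions whose names I have invented, but the interaction they show (match locally, punt unknown packets to the controller, install the returned flow entry) is the one described in the points above.

# Toy model of the OpenFlow switch/controller interaction described above.
# All names here are invented for illustration; no real OpenFlow library is used.

flow_table = {}          # match field (here just a destination IP) -> action

def controller_decides(dst_ip):
    """Controller logic: decide an action for an unknown flow and return a new entry."""
    action = "drop" if dst_ip == "10.0.0.66" else "forward"   # pretend 10.0.0.66 is known-bad
    return {"match": dst_ip, "action": action}

def switch_handle(dst_ip):
    """Switch logic: match locally if possible, otherwise ask the controller."""
    if dst_ip in flow_table:
        return flow_table[dst_ip]                 # matching flow found, act locally
    entry = controller_decides(dst_ip)            # packet-in to the controller
    flow_table[entry["match"]] = entry["action"]  # controller installs a new flow entry
    return flow_table[dst_ip]

print(switch_handle("10.0.0.5"))    # first packet goes to the controller -> 'forward'
print(switch_handle("10.0.0.5"))    # subsequent packets are handled locally
print(switch_handle("10.0.0.66"))   # known-bad destination -> 'drop'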
The best resource for information on the OpenFlow standard is the Open Networking Foundation (ONF), established in 2011 by Deutsche Telekom, Facebook, Google, Microsoft, Verizon and Yahoo. One of the most powerful aspects of OpenFlow is the fact that it is open, meaning researchers can contribute to new methods of network management, operation and control, unlike the closed-shop model of networking that exists in today's networks and that has led to stifled innovation.


One major advantage of having an open source platform for networking is security; it is widely known that open source software tends to be a lot more secure than off-the-shelf distributions. This is due to the fact that open source code can be peer reviewed by anyone interested in the field, leading to faster discovery and patching of security issues and weaknesses before a product is introduced to a working environment. This is the type of innovation that networking has been lacking, but with the introduction of SDN this is all starting to change.


Tuesday 7 April 2015

Software-Defined Networking and zero-day network attacks

In examining software-defined networking (SDN) as a possible solution to zero-day network attacks, we must first look at zero-day attacks as a separate entity in order to fully understand the concept. Due to a lack of current research we do not know exactly how SDN will stand up to a zero-day attack or whether it is possible to automate against one in real time. It is important, however, to explore what a zero-day attack is and what the strengths and weaknesses of SDN are, and whether its capabilities will aid or hinder the future defense of modern networks. Most known successful zero-day attacks take the form of polymorphic worms, viruses, Trojans, and other malware. According to Kaur & Singh (2014), “the most effective attacks that avoid detection are polymorphic worms which show distinct behaviors. This includes: complex mutation to evade defenses, multi-vulnerability scanning to identify potential targets and targeted exploitation that launches directed attacks against vulnerable hosts”, and that is just to mention a few of the capabilities of this type of exploit.

The majority of these attacks on the average user may cause hardware damage, and at most their aim is to steal sensitive data or turn the infected machine into a zombie computer that can be used in a distributed denial-of-service (DDoS) attack; the impact is mostly minimal. The problem arises when these attacks take place on large organizations that hold major information, such as banks, social media corporations or resources such as nuclear power. If a zero-day attack is successful in this regard then the scope for malicious damage and theft of sensitive information increases significantly. A number of years ago this was not as big an issue, but now that the world is more connected than ever, security and networking have suddenly become a major concern.

In the past few years researchers have been trying to find ways to make computer networks more programmable. The reason for this is that computer networks are complex and hard to manage, and most of the hardware used across networks is proprietary, which can limit the resources of companies when it comes to expanding a network.
It also limits the types of protocols that can be used on a network, and different vendors may have different security gaps in their network infrastructures that can be exploited, which makes patching against new and emerging threats harder. This is an issue in modern networks: there are many different layers of network infrastructure running many different protocols at all levels, so the scope to exploit a flaw, either digitally or by gaining physical access to a network, remains a large threat. There are some network-management tools on the market that offer a central point for network configuration; however, these systems still operate at a level that uses individual protocols, mechanisms and configuration interfaces. This is one of the main reasons that modern networks suffer from slowed innovation, increased complexity and higher operational costs.

This is where the emergence of SDN as a possible major future player in networking comes from. The SDN model is a possible way to solve the legacy issues that plague modern networking. SDN operates by separating the control plane (which decides how traffic is handled) from the data plane (which forwards traffic using decisions made by the control plane). SDN then consolidates the control plane, so that a single software control program (such as Floodlight or OpenDaylight) has control of multiple data-plane elements. The controller can now exercise direct control over the state of the network's elements, such as routers, switches and firewalls. All of this can be monitored and driven through an application programming interface (API). The state of the network can now be granularly monitored, and distribution of patches and resources can be centralized. Programs can be written and automatically distributed across the entire network to enforce new policies. This granular nature can also respond in real time to changes in network traffic and, in theory, may be the solution to preventing future zero-day attacks.

In recent years there has been a significant increase in the number of zero-day attacks occurring. Hammarberg (2014) notes that “There were more zero-day vulnerabilities discovered in 2013 than in any previous year according to Symantec’s Internet Security Report of 2014”. This increase represented a total of 23 zero-day attacks, a 61% increase on 2012. Another key statistic highlighted by Hammarberg (2014) is the fact that the average exploit goes undetected for 312 days. This is a revelation that warrants serious consideration: if a potential attacker carries out a successful breach on a company or individual and is left undetected for 312 days, the scope to carry out harmful and unlawful activities is enormous. It can be deduced from this that the current safeguards in place are not fit for purpose and need to change rapidly to have a place in the defense procedures of the future. Before these defenses can be replaced by a new technology such as SDN, however, we must first ensure that the new technology is an adequate replacement.

Scott-Hayward, O’Callaghan and Sezer (2013) ask the question: “As the benefits of network visibility and network device programmability are discussed who exactly will benefit? Will it be the network operator or will it, in fact be the network intruder?” These are questions that may seem obvious but are extremely significant; in a world where the term cyber-warfare is starting to make news headlines, the network defenses of the future must stand up to attacks that could pose significant threats to human life and standards of living. This of course means that if SDN is to be a possible solution it must not just work better than the current technology, it must work faster and smarter; therefore the decisions made to strengthen the network security infrastructures of the future need to be well thought out and heavily tested.

It is apt to reference the Stuxnet worm, the world's first ever cyber-warfare attack, when we speak about the possible implications of cyber-warfare and zero-day attacks. This worm used a combination of four zero-day vulnerabilities to target industrial control systems in Iran and slow down their nuclear program. Stuxnet did not cause any loss of human life, but it is widely reported that the worm ruined almost one-fifth of Iran's nuclear centrifuges. Imagine a different scenario, a nuclear power plant for instance, where the command set of the worm was to overheat a reactor; the outcome of an attack like this, if successful, would be catastrophic. According to Kreutz, Ramos and Verissimo (2013), “An attack similar to Stuxnet, could have dramatic consequences in a highly configurable and programmable network”.

Scott-Hayward et al. (2013) state that “While security as an advantage of the SDN framework has been recognized, solutions to tackle the challenges of securing the SDN networks are fewer in number.” What we can take from this is that by implementing an SDN network infrastructure we may be able to implement more stringent and granular security features; however, the centralized control associated with the SDN platform may lead to other security issues, such as the potential for denial-of-service (DoS) attacks that take advantage of this centralized infrastructure.

This concern is addressed by Scott-Hayward et al. (2013) when they explain one possible defense technique that could be used to thwart the scanning techniques used by attackers to discover vulnerabilities. They state that one defense presented to thwart these attacks is the use of random virtual Internet Protocol (IP) addresses using SDN: “This technique uses the OpenFlow controller to manage a pool of virtual IP addresses, which are assigned to hosts within the network, hiding the real IP addresses from the outside world”.
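To give a feel for the idea, the sketch below models that kind of moving-target defense in plain Python. It is only an illustration of the concept (the mapping logic and names are my own, not the scheme from the paper or any real OpenFlow controller): the controller keeps the real addresses hidden and periodically re-assigns the externally visible virtual addresses.

# Illustrative only: a toy model of random virtual IP assignment by a controller.
import random

REAL_HOSTS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]                  # addresses to hide
VIRTUAL_POOL = ["172.16.0.%d" % i for i in range(10, 60)]          # externally visible pool

def assign_virtual_ips():
    """Pick a fresh virtual address for every real host (called on each rotation interval)."""
    chosen = random.sample(VIRTUAL_POOL, len(REAL_HOSTS))
    return dict(zip(REAL_HOSTS, chosen))

mapping = assign_virtual_ips()
print(mapping)              # e.g. {'10.0.0.1': '172.16.0.41', ...}

# In a real deployment the controller would install flow rules that rewrite the virtual
# address to the real one at the network edge, so scans only ever see short-lived
# virtual addresses.
mapping = assign_virtual_ips()   # next rotation: the visible addresses change again
print(mapping)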

According to Kreutz et al. (2013), “SDNs bring a very fascinating dilemma: an extremely promising evolution of networking architectures’, versus a dangerous increase in the threat surface”. This again suggests that the possible advantages of SDN may be significant, but the threats that may come with its implementation are also an unknown quantity. One potential danger that Kreutz et al. (2013) highlight is that anyone who gains access to the servers that host the network access control software has the potential to control the entire network. While this may be another potential problem, we must remember that there is always a fitting solution. According to Kreutz et al. (2013) there are a number of key mechanisms that can be used to help secure SDN infrastructures, including “replication, diversity, self-healing mechanisms, dynamic device association, trust between controllers and devices, trust between controllers and apps, security domains, secure components and fast and reliable update and patching”.

The above concepts are currently only recommended possible solutions, and the technology still needs to be developed and evolved to facilitate their implementation.
This again opens the debate on the implementation of SDN as a future network infrastructure. According to Kreutz et al. (2013), “the capabilities of SDN actually introduce new fault and attack planes, which open the doors for new threats that did not exist before and were harder to exploit”. This, however, in no way means that SDN is not the future of networking; it simply means that, as the suggestions above show, we face new challenges in securing the technology, which of course can be achieved by implementing and designing safeguards similar to those mentioned.

If we look at replication of the controller, for example, this is a very important concept for improving the dependability of a system. The idea is that the main controller is replicated a number of times along with the applications that run on it, making it possible to mask failures and to isolate instances of faults or malicious behavior in the network. If we go back and look at a zero-day exploit similar to Stuxnet: as it infects the controller, unusual network traffic is detected in real time, and with replication that controller could then be automatically segmented from the network. A replicated controller would simply take its place and normal network activity would resume with minimal disruption to network services.
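The sketch below illustrates that failover idea in simplified Python. Again, it is a conceptual sketch rather than anything tied to a real controller: the health check, anomaly flag and replica names are invented, and are used only to show the replication and isolation logic described above.

# Illustrative only: a toy model of controller replication and failover.

controllers = ["ctrl-primary", "ctrl-replica-1", "ctrl-replica-2"]
quarantined = []

def is_healthy(controller, anomaly_detected=False):
    """Stand-in health check: a controller showing anomalous traffic is treated as compromised."""
    return not anomaly_detected

def active_controller(anomaly_on_primary=False):
    """Return the controller the switches should talk to, isolating any unhealthy one."""
    for ctrl in controllers:
        anomaly = anomaly_on_primary and ctrl == "ctrl-primary"
        if is_healthy(ctrl, anomaly):
            return ctrl
        quarantined.append(ctrl)        # segment the suspect controller from the network
    raise RuntimeError("no healthy controller available")

print(active_controller())                         # ctrl-primary under normal operation
print(active_controller(anomaly_on_primary=True))  # ctrl-replica-1 takes over
print(quarantined)                                 # ['ctrl-primary']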


This type of defense does not exist in our current network infrastructures, and as we have seen previously most zero-day attacks currently go undetected for an average of 312 days. It can be concluded that SDN will play a major role in the future of networking; it may currently have a number of weaknesses that need to be addressed, but so does our current network infrastructure. As stated by Kreutz et al. (2013), “by separating the complexity of state distribution from network specification, SDN provides new ways to solve long-standing problems in networking”. The capability of SDN to thwart zero-day attacks needs to be a field of research into the future, as it may finally make it possible to stem such attacks at the root before they have a chance to embed themselves in a network. This research needs to continue to be carried out in an open and shared forum, as is currently happening; by doing so it is ensured that the best solutions are brought forward and implemented in the networks of the future.

This future implementation may not be as far away as some may think: on Tuesday the 10th of March 2015 the University of Bristol and Bristol City Council announced that Bristol is constructing the world's first software-defined city. Paul Wilson, CTO and Managing Director of the initiative, named Bristol is Open, is quoted as saying, “We want to go beyond ‘smart’ to an open, programmable city with an infrastructure that could be directly programmed and customized.” This will be considered one of the most significant real-world test beds for SDN to date. Wilson is also quoted as saying, “This is a research and development test bed for any city to learn from, we’re doing work here that could be replicated in cities around the world.”


By generating greater discussion and awareness of SDN with projects like Bristol is Open, and by looking closely at its pros and cons, we can only generate more secure infrastructure models and open new avenues of possibilities. This technology can be summed up by stating that the potential of SDN as a major player in the future of the world's networks is limitless; along with its possible potential to stop zero-day attacks in real time, it can only be assumed that SDN is here to stay.



References:

Sandra Scott-Hayward, Gemma O’Callaghan, Sakir Sezer (2013). SDN Security: A Survey. Future Networks and Services (SDN4FNS).

David Hammarberg (2014). The Best Defences Against Zero-day Exploits for Various-sized Organizations. SANS Institute InfoSec Reading Room.

Kim Zetter (2014).  Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon. ISBN 0-77043-617-X. ed. American: Crown.

Wired [online]. (2015). Available from: http://www.wired.com/2015/01/german-steel-mill-hack-destruction/. [Accessed 20/01/2015].

Sdx central [online]. (2015) Available from:

Diego Kreutz, Fernando M. V. Ramos, Paula Verissimo (2013). Towards Secure and Dependable Software-Defined Networks. ISBN: 978-1-4503-2178-5.

Leyla Bilge, Tudor Dumitras (2012). Before we knew it: an empirical study of  zero-day attacks in the real world. ISBN: 978-1-4503-1651-4.

             

Setting up the OpenDaylight Controller and linking it to mininet

Prerequisites:
-VMWare Workstation [recommended] or
-Virtual Box [free software not as good but does the job]
-Ubuntu 14.04 32bit OS installed and named Controller or something like that [please have this vm set up by following my blog on setting up mininet from scratch]
-Second OS running Ubuntu 14.04 32bit installed and named Mininet [please have this vm set up by following my blog on setting up mininet from scratch]

Note: you will have to set up the Ubuntu 14.04 virtual machines yourself; I recommend 20GB of hard drive space and 4GB of RAM if your machine can handle it. Also, both machines must be set up with mininet as instructed in my previous blog.

The most straightforward way to set up the OpenDaylight Controller (ODL) is to download the latest pre-built version from their website http://www.opendaylight.org/software/downloads .
I would highly recommend this option for anyone who is new to the world of software-defined networking (SDN). The latest version of the ODL controller available at the moment is Helium-SR3. Make sure that you download the .tar file when using Ubuntu. When the download is complete, right click on the file and extract it to your home folder; you are now ready to get started. You should have your two VMs powered on side by side; make sure that your mininet VM has been set up properly as indicated in my blog on how to set up mininet from scratch. I do not want you to run mininet on this machine yet, so if it is running type exit into the mininet command prompt. The first machine we are going to work on is the controller VM. You have downloaded and extracted the most recent distribution, so now let's get it to run.
Launch the terminal and type in

dir

If you have extracted the distribution to your home folder you will see it here.
The next command you need is

cd distribution-karaf-0.2.3-Helium-SR3

Now, to run the controller, type in

./bin/karaf

The controller will come up, but it takes about 6 minutes to come online fully.
We now need to find out the IP address of our virtual machine. This can be obtained by opening another terminal window (don't close the ODL controller command window). In the second terminal type in

ifconfig

and take note of the IP address of the machine; you will need it to link mininet to the controller.
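Optionally, before moving on you can check that the controller is actually listening for OpenFlow connections. Assuming the OpenFlow plugin has loaded and is using the default OpenFlow port of 6633, something like the following in the second terminal should show a listening socket:

netstat -an | grep 6633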
The video below will go through the above steps in case you don't understand anything.




We are now going to leave the controller VM and go to our mininet VM; if you have set it up as instructed in the previous blog on mininet, this should work just fine.
Now we are going to link mininet to our controller that is running on a separate VM. We already have the IP address of the machine that our controller is sitting on, which in this case is 192.168.195.135; your IP will more than likely be different, so don't forget to check and change it accordingly.
In our terminal window enter the following command to link mininet and the controller

sudo mn --topo=tree,3 --mac --switch=ovsk --controller=remote,ip=192.168.195.135

If you have entered this correctly then mininet should be connected. We now need to go back to our controller VM and see if we are all good. There is a video below going through the above setup.
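As a side note, the same topology can be built from a short Python script using mininet's own Python API instead of the mn command line. The sketch below is roughly equivalent to the command above; it assumes mininet is installed as per the earlier blog, and you will need to swap in your own controller IP (192.168.195.135 is just the address used in this example). Run it with sudo.

#!/usr/bin/env python
# Roughly equivalent to: sudo mn --topo=tree,3 --mac --switch=ovsk --controller=remote,ip=...
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topolib import TreeTopo

ODL_IP = "192.168.195.135"   # replace with the IP address of your controller VM

topo = TreeTopo(depth=3)     # same tree,3 topology as the mn command above
net = Mininet(topo=topo, switch=OVSSwitch,
              controller=lambda name: RemoteController(name, ip=ODL_IP, port=6633))
net.start()
net.pingAll()                # same effect as running pingall inside the mininet CLI
net.stop()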

OK, so if all of the above is working like in the videos then you are in a good place and you are about to see what mininet looks like when connected to the OpenDaylight controller. Let's head back over to our controller VM and launch the browser. In the browser type in the following

localhost:8181/dlux/index.html#/login

When the interface for the ODL DLUX login comes up, the username and password are admin/admin.
You are now logged in and you should be able to see your mininet network displayed in the Topology tab. There is a video below showing you how to do this, but if you have got this far, well done, and have fun experimenting with mininet and OpenDaylight. Please leave a comment and let me know how you got on and whether this blog has been helpful.





Monday 6 April 2015

Wireshark won't open with mininet [fix]

This is a problem that I have encountered: I have been running wireshark and mininet together with no issues, then I go back a day or two later and wireshark won't boot up. This can be very frustrating, especially since everything was working and now it's not. Don't panic, as there is a workaround; it seems that mininet can get a little bit muddled sometimes and you just have to clear its mind, so to speak.

In order to do this, launch your terminal window and enter the following command

sudo mn -c

This command cleans up after mininet and kills any controllers that may be running in the background.
Next we are going to launch mininet

sudo mn

Inside the mininet command line issue the following command

dump

Now exit mininet by typing

exit

into the mininet command prompt. Now let's try to launch wireshark.

sudo wireshark &

If we have been successful, wireshark will launch and all is good again. I have made a little video that you can watch below to guide you through this process. Best of luck.



Setting up Mininet From Scratch on Ubuntu 14.04

Prerequisites for this set up:
-VMWare Workstation [recommended] or
-Virtual Box [Free software not as good as VMWare but works just fine]
-Ubuntu 14.04 operating system 32bit link below
[  http://www.ubuntu.com/download/desktop ]
-Internet connection with access to GitHub
-Patience and composure are also required [you cannot download this :)]
___________________________________________________________________________

Software-defined networking (SDN) is the new kid on the block when it comes to networking, but to understand it we must first get hands-on with the technology. The best way to do this without going bankrupt is to use the Stanford University SDN emulation software known as mininet. There are many in-depth tutorials on how to install the software, and after running into a number of different issues I have decided to let you know how I installed mininet with all of its bells and whistles. Don't forget that http://mininet.org/ is a gold mine of information when it comes to this technology and should be referred to for guidelines and instructions. I will also upload videos on this blog that you can follow step by step with me as I run through an install of mininet from start to finish. You're going to have to set up the virtual machine yourself, but this is very straightforward and there are many different tutorials on this, so let's go straight to getting mininet up and running.

The first step is to access the settings on the Ubuntu OS and turn off the lock function; this stops the machine locking during an install and knocking out your connection to GitHub, which will happen if the machine locks.
The next step is to open the terminal and type in the following command.
sudo apt-get install mininet
This command will install a number of core files for mininet and allow mininet to run. You can see the output of this command by referring to Fig 2.1 below.

Fig 2.1

The next step is to kill any controllers that mininet may have activated by issuing the following command; you can see the output from this command below in Fig 2.2.
sudo mn -c

Fig 2.2


The next step is to install git so that mininet can be downloaded from GitHub and create a file structure on our test machine. The output from the following command can be seen in Fig 2.3.
sudo apt-get install git

Fig 2.3

Now that git has been installed we need to pull mininet down from GitHub using the following command; you can view the output in Fig 2.4 below.
git clone git://github.com/mininet/mininet

Fig 2.4

The next step is to switch mininet to the version we want by entering the commands below; you can see the output in Fig 2.5.
cd mininet
git tag # list available versions
git checkout -b cs244-spring-2012-final

Fig 2.5


The next step is the most important one, as we now want to install all of the elements that mininet has to offer. If this step is not carried out, mininet may not connect properly with the controller and wireshark will not run. To carry out this step enter the command below (you may need to cd .. to get back to your home folder first).
mininet/util/install.sh -a
If this doesn't work, try
mininet/util/install.sh -a Ubuntu 14.04 trusty i386 Ubuntu
The final step is to run wireshark and to start mininet. To do this we must first open a separate terminal to run wireshark in and enter the following command.
sudo wireshark &
In wireshark select lo as the interface to sniff, type "of" into the filter box and apply it; this tells wireshark to display OpenFlow packets. We won't see anything in wireshark until mininet is started. To start mininet type the following command.
sudo mn
In order to get traffic flowing and to see it in wireshark, inside mininet type:
h1 ping h2
If you refer to Fig 2.6 below you can see that mininet is now running and wireshark is reading the packets.

Fig 2.6

If you want to follow my video tutorial you can view it below; it is just all of the above on a clean install of Ubuntu 14.04 from start to finish.




I hit a few bumps during the installation, which is good as you can see the potential issues that can arise, so I have left the video unedited.
Please note that unfortunately there is no audio, but all of the steps can be followed and you can pause and rewind the video as many times as you need to get it right. Best of luck.
Refer to the mininet walkthrough to learn more about the functionality of the software http://mininet.org/walkthrough/ .