More content

Valentin Brandl 2022-03-08 20:15:29 +01:00
parent a83a4c1c61
commit 73073ded36
9 changed files with 196 additions and 112 deletions

View File

@@ -29,7 +29,7 @@ lint:
.PHONY: languagetool
languagetool:
-	languagetool <(cat $(SRC) | ./scripts/detex.py)
+	languagetool <(cat content.tex | ./scripts/detex-languagetool.py)
.PHONY: clean
clean: clean_dot clean_tex

View File

@@ -68,4 +68,14 @@
  long = {weakly connected component}
}
+\DeclareAcronym{as}{
+  short = {AS},
+  long = {autonomous system}
+}
+\DeclareAcronym{grpc}{
+  short = {gRPC},
+  long = {gRPC remote procedure call}
+}
% vim: set filetype=tex ts=2 sw=2 tw=0 et :

View File

@@ -1,52 +1,31 @@
digraph G {
  /* splines = false; */
  node [ shape = "circle" ];
-  /* c0 */
-  /* c1 */
-  /* n0 */
-  /* n1 */
-  /* n2 */
-  /* n3 */
-  /* n0 -> n1; */
-  /* n0 -> n2; */
-  /* n0 -> n3; */
-  /* n1 -> n2; */
-  /* n1 -> n3; */
-  /* n2 -> n3; */
-  /* /1* c0 -> c1; *1/ */
-  /* /1* c1 -> c0; *1/ */
-  /* n0 -> c0; */
-  /* n1 -> c1; */
-  /* n1 -> c0; */
-  /* n2 -> c0; */
-  /* n3 -> c0; */
-  /* n3 -> c1; */
+  subgraph cluster0 {
    c0
    c1
    c2
+  }
+  subgraph cluster1 {
    n0
    n1
    n2
    n0 -> n1;
    n0 -> n2;
    n1 -> n2;
+    n1 -> n0;
+    n2 -> n0;
+  }
-  /* c0 -> c1; */
-  /* c0 -> c2; */
-  /* c1 -> c0; */
-  /* c1 -> c2; */
-  /* c2 -> c0; */
-  /* c2 -> c1; */
-  n0 -> c0;
-  n0 -> c2;
-  n1 -> c1;
-  n1 -> c0;
-  n2 -> c0;
-  n2 -> c2;
+  c0 -> n0;
+  c0 -> n2;
+  c1 -> n1;
+  c1 -> n0;
+  c2 -> n0;
+  c2 -> n2;
}

View File

@@ -1,16 +1,10 @@
digraph G {
  /* splines = false; */
  node [ shape = "circle" ];
+  subgraph cluster0 {
    c0
    c1
    c2
-  n0
-  n1
-  n2
-  n0 -> n1;
-  n0 -> n2;
-  n1 -> n2;
    c0 -> c1;
    c0 -> c2;
@@ -18,12 +12,25 @@ digraph G {
    c1 -> c2;
    c2 -> c0;
    c2 -> c1;
+  }
-  n0 -> c0;
-  n0 -> c2;
-  n1 -> c1;
-  n1 -> c0;
-  n2 -> c0;
-  n2 -> c2;
+  subgraph cluster1 {
+    n0
+    n1
+    n2
+    n0 -> n1;
+    n0 -> n2;
+    n1 -> n2;
+    n1 -> n0;
+    n2 -> n0;
+  }
+  c0 -> n0;
+  c0 -> n2;
+  c1 -> n1;
+  c1 -> n0;
+  c2 -> n0;
+  c2 -> n2;
}

View File

@@ -189,4 +189,20 @@
  file = {/home/me/Zotero/storage/R3AAQR9Q/Andriesse et al. - 2013 - Highly resilient peer-to-peer botnets are here An.pdf}
}
+@inproceedings{stutzbach_churn_2006,
+  title = {Understanding Churn in Peer-to-Peer Networks},
+  booktitle = {Proceedings of the 6th {{ACM SIGCOMM}} on {{Internet}} Measurement - {{IMC}} '06},
+  author = {Stutzbach, Daniel and Rejaie, Reza},
+  date = {2006},
+  pages = {189},
+  publisher = {{ACM Press}},
+  location = {{Rio de Janeiro, Brazil}},
+  doi = {10.1145/1177080.1177105},
+  url = {http://portal.acm.org/citation.cfm?doid=1177080.1177105},
+  urldate = {2022-03-08},
+  eventtitle = {The 6th {{ACM SIGCOMM}}},
+  isbn = {978-1-59593-561-8},
+  langid = {english}
+}
/* vim: set filetype=bib ts=2 sw=2 tw=0 et :*/

View File

@@ -1,7 +1,7 @@
%{{{ introduction
\section{Introduction}
-The internet has become an irreplaceable part of our day to day lives.
+The internet has become an irreplaceable part of our day-to-day lives.
We are always connected via numerous \enquote{smart} and \ac{iot} devices.
We use the internet to communicate, shop, handle financial transactions and much more.
Many personal and professional workflows are so dependent on the internet that they won't work when offline.
@@ -16,18 +16,17 @@ This makes them an attractive target for botmasters since they are easy to infec
In recent years, \ac{iot} botnets have been responsible for some of the biggest \ac{ddos} attacks ever recorded, creating up to 1 Tbit/s of traffic~\cite{ars_ddos_2016}.
-% TODO: what is a bot? Infected systems. Malware. DGA, examples, tree vs graph
+\todo{what is a bot? Infected systems. Malware. DGA, examples, tree vs graph}
A botnet describes a network of connected computers with some way to control the infected systems.
In classic botnets, there are one or more central coordinating hosts called \ac{c2} servers.
These \ac{c2} servers could use anything from \ac{irc} over \ac{http} to Twitter as a communication channel with the infected systems.
-The infected systems can be abused for a number of things, \eg{} \ac{ddos} attacks, stealing data from victims, as proxies to hide the attackers identity, send spam emails\dots{}
+The infected systems can be abused for a number of things, \eg{} \ac{ddos} attacks, stealing data from victims, as proxies to hide the attacker's identity, send spam emails\dots{}
-Analyzing and shutting down a centralized botnet is comparatively easily since every bot knows the IP address, domain name, Twitter handle or \ac{irc} channel the \ac{c2} servers are using.
+Analysing and shutting down a centralized botnet is comparatively easy since every bot knows the IP address, domain name, Twitter handle or \ac{irc} channel the \ac{c2} servers are using.
A targeted operation with help from law enforcement, hosting providers, domain registrars and platform providers could shut down or take over the operation by changing how requests are routed or simply shutting down the controlling servers/accounts.
-% TODO: better image for p2p, really needed?
%{{{ fig:c2vsp2p
\begin{figure}[h]
  \centering
@@ -43,16 +42,16 @@ A targeted operation with help from law enforcement, hosting providers, domain r
  \end{subfigure}%
  \caption{Communication paths in different types of botnets}\label{fig:c2vsp2p}
\end{figure}
+\todo{better image for p2p, really needed?}
%}}}fig:c2vsp2p
-% TODO: too informal?
-A number of botnet operations were shut down like this and as the defenders upped their game, so did attackers --- the idea of \ac{p2p} botnets came up.
+A number of botnet operations were shut down like this and as the defenders upped their game, so did attackers\todo{too informal?} --- the idea of \ac{p2p} botnets came up.
The idea is to build a decentralized network without single points of failure where the \ac{c2} servers are as shown in \autoref{fig:p2p}.
-In a \ac{p2p} botnet, each node in the network knows a number of it's neighbours and connects to those, each of these neighbours has a list of neighbours on his own, and so on.
+In a \ac{p2p} botnet, each node in the network knows a number of its neighbours and connects to those; each of these neighbours has a list of neighbours of its own, and so on.
This lack of a \ac{spof} makes \ac{p2p} botnets more resilient to take-down attempts since the communication is not stopped and botmasters can easily rejoin the network and send commands.
-Formally, a \ac{p2p} botnet can be modeled as a digraph
+Formally, a \ac{p2p} botnet can be modelled as a digraph
\begin{align*}
  G &= (V, E)
@@ -68,11 +67,11 @@ For a vertex \(v \in V\), the in and out degree \(\deg^{+}\) and \(\deg^{-}\) de
For a vertex \(v \in V\), the in degree \(\deg^{+}(v) = \abs{\{ u \in V \mid (u, v) \in E \}}\) and out degree \(\deg^{-}(v) = \abs{\{ u \in V \mid (v, u) \in E \}}\) describe how many bots know \(v\) and how many nodes \(v\) knows respectively.
-% TODO: source for constantly growing, position in text
-% TODO: take-down? take down?
The damage produced by botnets has been constantly growing and there are many researchers and law enforcement agencies trying to shut down these operations.
The monetary value of these botnets directly correlates with the amount of effort botmasters are willing to put into implementing defense mechanisms against take-down attempts.
Some of these countermeasures include deterrence, which limits the number of allowed bots per IP address or subnet to 1; blacklisting, where known crawlers and sensors are blocked from communicating with other bots in the network (mostly IP based); disinformation, when fake bots are placed in the neighbourhood lists, which invalidates the data collected by crawlers; and active retaliation like \ac{ddos} attacks against sensors or crawlers~\cite{andriesse_reliable_2015}.
+\todo{source for constantly growing, position in text}
+\todo{take-down? take down?}
%}}} motivation
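To make the definitions of \(\deg^{+}\) and \(\deg^{-}\) above concrete, here is a minimal Go sketch; the \mintinline{go}{Edge} type and the example edge list are illustrative assumptions, with an edge \((u, v)\) meaning \(u\) knows \(v\):

\begin{minted}{go}
package main

import "fmt"

// Edge is a directed edge (From, To): From knows To.
type Edge struct{ From, To string }

// degrees counts deg+(v) = |{u | (u,v) in E}| (how many bots know v) and
// deg-(v) = |{u | (v,u) in E}| (how many nodes v knows) for every node.
func degrees(edges []Edge) (degIn, degOut map[string]int) {
	degIn, degOut = map[string]int{}, map[string]int{}
	for _, e := range edges {
		degIn[e.To]++
		degOut[e.From]++
	}
	return degIn, degOut
}

func main() {
	// Tiny example: three bots know each other and all of them know a
	// sensor s, which has no outgoing edges of its own.
	edges := []Edge{
		{"n0", "n1"}, {"n1", "n2"}, {"n2", "n0"},
		{"n0", "s"}, {"n1", "s"}, {"n2", "s"},
	}
	degIn, degOut := degrees(edges)
	fmt.Println("deg+:", degIn)  // s has deg+ = 3: every bot knows it
	fmt.Println("deg-:", degOut) // s has no entry: no outgoing edges
}
\end{minted}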
@@ -86,18 +85,18 @@ There are two distinct methods to map and get an overview of the network topolog
%{{{ passive detection
\subsubsection{Passive Detection}
-For passive detection, traffic flows are analyzed in large amounts of collected network traffic (\eg{} from \acp{isp}).
-This has some advantages in that it is not possible for botmasters to detect or prevent data collection of that kind but it is not trivial to distinguish valid \ac{p2p} application traffic (\eg{} BitTorrent, Skype, cryptocurrencies, \ldots) from \ac{p2p} bots.
+For passive detection, traffic flows are analysed in large amounts of collected network traffic (\eg{} from \acp{isp}).
+This has some advantages in that it is not possible for botmasters to detect or prevent data collection of that kind, but it is not trivial to distinguish valid \ac{p2p} application traffic (\eg{} BitTorrent, Skype, cryptocurrencies, \ldots) from \ac{p2p} bots.
\citeauthor{zhang_building_2014} propose a system of statistical analysis to solve some of these problems in~\cite{zhang_building_2014}.
Also getting access to the required datasets might not be possible for everyone.
-% TODO: no context
+\todo{no context}
+\todo{BotGrep (in zhang\_building\_2014)}
+\todo{BotMiner (in zhang\_building\_2014)}
\begin{itemize}
-% TODO: BotGrep (in zhang_building_2014)
  \item Large scale network analysis (hard to differentiate from legitimate \ac{p2p} traffic (\eg{} BitTorrent), hard to get data, knowledge of some known bots required)~\cite{zhang_building_2014}
-% TODO: BotMiner (in zhang_building_2014)
  \item Heuristics: Same traffic patterns, same malicious behaviour
\end{itemize}
@@ -110,7 +109,7 @@ Also getting access to the required datasets might not be possible for everyone.
In this case, a subset of the botnet protocol is reimplemented to place pseudo-bots or sensors in the network, which will only communicate with other nodes but won't accept or execute commands to perform malicious actions.
The difference in behaviour from the reference implementation and conspicuous graph properties (\eg{} high \(\deg^{+}\) vs.\ low \(\deg^{-}\)) of these sensors allows botmasters to detect and block the sensor nodes.
-There are three subtypes auf active detection:
+There are three subtypes of active detection:
\begin{enumerate}
@@ -150,7 +149,6 @@ The implementation of the concepts of this work will be done as part of \ac{bms}
\footnotetext{\url{https://github.com/Telecooperation/BMS}}
\Ac{bms} uses a hybrid active approach of crawlers and sensors (reimplementations of the \ac{p2p} protocol of a botnet, that won't perform malicious actions) to collect live data from active botnets.
-% TODO: reference for page rank
In an earlier project, I implemented different node ranking algorithms (among others \enquote{PageRank}~\cite{page_pagerank_1998}) to detect sensors and crawlers in a botnet, as described in \citetitle{karuppayah_sensorbuster_2017}.
Both ranking algorithms use the \(\deg^+\) and \(\deg^-\) to weight the nodes.
Another way to enumerate candidates for sensors in a \ac{p2p} botnet is to find \acp{wcc} in the graph.
@@ -159,24 +157,24 @@ Sensors will have few to none outgoing edges, since they don't participate activ
The goal of this work is to complicate detection mechanisms like this for botmasters, by centralizing the coordination of the system's crawlers and sensors, thereby reducing the node's rank for specific graph metrics.
The changes should allow the current sensors to use the new abstraction with as few changes as possible to the existing code.
-The final result should be as general as possible and not depend on any botnet's specific behaviour but it assumes, that every \ac{p2p} botnet has some kind of \enquote{getNeighbourList} method in the protocol, that allows other peers to request a list of active nodes to connect to.
+The final result should be as general as possible and not depend on any botnet's specific behaviour, but it assumes that every \ac{p2p} botnet has some kind of \enquote{getNeighbourList} method in the protocol that allows other peers to request a list of active nodes to connect to.
In the current implementation, each sensor will itself visit and monitor each new node it finds.
-The idea for this work is to report newfound nodes back to the \ac{bms} backend first, where the graph of the known network is created and a sensor is selected, so that the specific ranking algorithm doesn't calculate to a suspiciously high or low value.
+The idea for this work is to report newfound nodes back to the \ac{bms} backend first, where the graph of the known network is created and a sensor is selected, so that the specific ranking algorithm doesn't yield a suspiciously high or low value.
That sensor will be responsible for monitoring the new node.
If it is not possible to select a specific sensor so that the monitoring activity stays inconspicuous, the coordinator can do a complete shuffle of all nodes between the sensors to restore the wanted graph properties or warn if more sensors are required to stay undetected.
-The improved sensor system should allow new sensors to register themselves and their capabilities (\eg{} bandwidth, geolocation, ), so the amount of work can be scaled accordingly between hosts.
+The improved sensor system should allow new sensors to register themselves and their capabilities (\eg{} bandwidth, geolocation), so the amount of work can be scaled accordingly between hosts.
Further work might even consider autoscaling the monitoring activity using some kind of cloud computing provider.
To validate the result, the old sensor implementation will be compared to the new system using different graph metrics.
-% TODO: maybe?
+\todo{maybe?}
If time allows, \ac{bsf}\footnotemark{} will be used to simulate a botnet, place sensors in the simulated network and measure the improvement achieved by the coordinated monitoring effort.
\footnotetext{\url{https://github.com/tklab-tud/BSF}}
-% TODO: which botnet?
+\todo{which botnet?}
As a proof of concept, the coordinated monitoring approach will be implemented and deployed in the (Sality, Mirai, ...)? botnet.
%}}} methodology
@@ -186,17 +184,17 @@ As a proof of concept, the coordinated monitoring approach will be implemented a
The coordination protocol must allow the following operations:
-% TODO: extend testnet + testnet crawler to verify against complete knowledge
+\todo{extend testnet + testnet crawler to verify against complete knowledge}
%{{{ sensor to backend
\subsubsection{Sensor to Backend}
+\todo{use/extend the existing session mechanism}
+\todo{track failedTries in the backend instead of a separate message type: remove?}
\begin{itemize}
-% TODO: use/extend the existing session mechanism
  \item \mintinline{go}{registerSensor(capabilities)}: Register a new sensor with its capabilities (which botnet, available bandwidth, \ldots). This is called periodically and used to determine which crawler is still active when splitting the workload.
-% TODO: track failedTries in the backend instead of a separate message type: remove?
  \item \mintinline{go}{unreachable(targets)}:
  \item \mintinline{go}{requestTasks() []PeerTask}: Receive a batch of crawl tasks from the coordinator. The tasks consist of the target peer, whether the crawler should start or stop the operation, when it should start and stop monitoring, and the frequency; a possible Go shape is sketched below.
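A minimal sketch of how these operations and the \mintinline{go}{PeerTask} type could look in Go follows; every field name, the \mintinline{go}{Capabilities} type and the error-returning signatures are assumptions for illustration, not the actual \ac{bms} interface:

\begin{minted}{go}
package coordination

import "time"

// Capabilities describes what a sensor offers when registering.
// The exact fields are illustrative assumptions.
type Capabilities struct {
	Botnet    string // which botnet protocol the sensor speaks
	Bandwidth int    // available bandwidth, e.g. in kbit/s
	Location  string // rough geolocation
}

// PeerTask is a single crawl/monitor assignment handed out by the
// coordinator: target peer, start/stop flag, monitoring window, frequency.
type PeerTask struct {
	Target    string        // address of the peer to monitor
	Start     bool          // true = start monitoring, false = stop
	NotBefore time.Time     // when to start monitoring
	NotAfter  time.Time     // when to stop monitoring
	Frequency time.Duration // how often to contact the peer
}

// SensorToBackend collects the operations a sensor calls on the coordinator.
type SensorToBackend interface {
	// RegisterSensor is called periodically and doubles as a liveness signal.
	RegisterSensor(caps Capabilities) error
	// Unreachable reports peers the sensor could not contact.
	Unreachable(targets []string) error
	// RequestTasks fetches the next batch of crawl tasks.
	RequestTasks() ([]PeerTask, error)
}
\end{minted}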
@@ -225,12 +223,12 @@ type PeerTask struct {
% TODO: remove?
\subsubsection{Backend to Sensor}
-\begin{itemize}
-% \item \mintinline{go}{stopCrawling(targets)}: Stop crawling a batch of nodes
-\end{itemize}
+% \begin{itemize}
+% % \item \mintinline{go}{stopCrawling(targets)}: Stop crawling a batch of nodes
+% \end{itemize}
%}}} backend to sensor
@@ -246,6 +244,22 @@ type PeerTask struct {
The GameOver Zeus botnet deployed a blacklisting mechanism, where crawlers are blocked based on their request frequency~\cite{andriesse_goz_2013}.
In a single crawler approach, the crawler frequency has to be limited to prevent hitting the request limit.
+%{{{ fig:old_crawler_timeline
+\begin{figure}[h]
+  \centering
+  \begin{chronology}[10]{0}{100}{0.9\textwidth}
+    \event{0}{\(C_0\)}
+    \event{20}{\(C_0\)}
+    \event{40}{\(C_0\)}
+    \event{60}{\(C_0\)}
+    \event{80}{\(C_0\)}
+    \event{100}{\(C_0\)}
+  \end{chronology}
+  \caption{Timeline of crawler events as seen from a peer when crawled by a single crawler}\label{fig:old_crawler_timeline}
+\end{figure}
+%}}} fig:old_crawler_timeline
Using collaborative crawlers, an arbitrarily high frequency can be achieved without being blacklisted.
With \(L \in \mathbb{N}\) being the frequency limit at which a crawler will be blacklisted and \(F \in \mathbb{N}\) being the crawl frequency that should be achieved,
the number of crawlers \(C\) required to achieve the frequency \(F\) without being blacklisted and the offset \(O\) between crawlers are defined as
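The definitions themselves lie outside the lines shown here; assuming \(C = \lceil \frac{F}{L} \rceil\) and \(O = \frac{1}{F}\) (a plausible reading of the scheduling example that follows, not a quote from the text), a small Go helper illustrates the calculation:

\begin{minted}{go}
package main

import (
	"fmt"
	"math"
)

// crawlerPlan computes how many crawlers are needed and at which offset they
// have to be started, assuming C = ceil(F/L) and O = 1/F (illustrative
// reading of the scheduling described in the text; L and F are per second).
func crawlerPlan(limit, target float64) (count int, offset float64) {
	count = int(math.Ceil(target / limit))
	offset = 1.0 / target
	return count, offset
}

func main() {
	// Example: a peer blacklists anything above 5 requests/s, but an
	// effective rate of 20 requests/s is wanted.
	c, o := crawlerPlan(5, 20)
	fmt.Printf("crawlers needed: %d, offset between crawlers: %.3fs\n", c, o)
	// With 4 crawlers offset by 0.05s each, every single crawler stays at 5 req/s.
}
\end{minted}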
@@ -290,7 +304,7 @@ Those crawlers must be scheduled \(O = \frac{1\si{\request}}{20\si{\request\per
    \event{75}{\(C_3\)}
    \event{95}{\(C_3\)}
  \end{chronology}
-  \caption{Timeline of crawler events as seen from a peer}\label{fig:crawler_timeline}
+  \caption{Timeline of crawler events as seen from a peer when crawled by multiple crawlers}\label{fig:crawler_timeline}
\end{figure}
%}}} fig:crawler_timeline
@@ -328,15 +342,38 @@ While the effective frequency of the whole system is halved compared to~\autoref
%}}} frequency reduction
%{{{ against graph metrics
-% TODO: sensible?
+\todo{sensible?}
\subsection{Working Against Suspicious Graph Metrics}
\citetitle*{karuppayah_sensorbuster_2017} describes different graph metrics to find sensors in \ac{p2p} botnets.
One of those, \enquote{SensorBuster}, uses \acp{wcc} since crawlers don't have any edges back to the main network in the graph.
-It would be possible to implement the crawlers so they return other crawlers in their peer list responses but this would still produce a disconnected component and as long as this component is smaller than the main network, it is still easily detectable since there is no path from the crawler component back to the main network.
+Building a complete graph \(G_C = K_{\abs{C}}\) between the crawlers by making them return the other crawlers on peer list requests would still produce a disconnected component and, while being bigger and maybe not as obvious at first glance, it is still easily detectable since there is no path from \(G_C\) back to the main network (see~\autoref{fig:sensorbuster2} and~\autoref{fig:metrics_table}).
+\todo{rank? deg+ - deg-?}
+With \(v \in V\), \(\text{rank}(v)\) being the rank of \(v\), \(\text{succ}(v)\) being the set of successors of \(v\) and \(\text{pred}(v)\) being the set of predecessors of \(v\), PageRank is defined as~\cite{page_pagerank_1998}:
+\[
+  \text{PR}(v) = \text{dampingFactor} \times \sum\limits_{p \in \text{pred}(v)} \frac{\text{rank}(p)}{\abs{\text{succ}(p)}} + \frac{1 - \text{dampingFactor}}{\abs{V}}
+\]
+When using PageRank to rank websites in search results, the dampingFactor describes the probability that a person following links on the web continues doing so.
+For simplicity, and since it is not required to model human behaviour for automated crawling and ranking, a dampingFactor of \(1.0\) will be used, which simplifies the formula to
+\[
+  \text{PR}(v) = \sum\limits_{p \in \text{pred}(v)} \frac{\text{rank}(p)}{\abs{\text{succ}(p)}}
+\]
+Based on this, SensorRank is defined as
+\[
+  \text{SR}(v) = \frac{\text{PR}(v)}{\abs{\text{succ}(v)}} \times \frac{\abs{\text{pred}(v)}}{\abs{V}}
+\]
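Both formulas are easy to reproduce in a few lines of Go; the graph representation and the guard for nodes without successors are assumptions. A single iteration with an initial rank of \(0.25\) on graph~(b) of \autoref{fig:sensorbuster} reproduces the second value in each PageRank and SensorRank cell of the table below:

\begin{minted}{go}
package main

import "fmt"

// graph maps each node to the set of nodes it knows (its successors).
type graph map[string][]string

// predecessors inverts the successor lists.
func predecessors(g graph) map[string][]string {
	pred := map[string][]string{}
	for u, succs := range g {
		for _, v := range succs {
			pred[v] = append(pred[v], u)
		}
	}
	return pred
}

// pageRankOnce performs a single PageRank iteration with dampingFactor = 1,
// i.e. PR(v) = sum over p in pred(v) of rank(p)/|succ(p)|.
func pageRankOnce(g graph, rank map[string]float64) map[string]float64 {
	pred := predecessors(g)
	pr := map[string]float64{}
	for v := range g {
		pr[v] = 0
		for _, p := range pred[v] {
			pr[v] += rank[p] / float64(len(g[p]))
		}
	}
	return pr
}

// sensorRank computes SR(v) = PR(v)/|succ(v)| * |pred(v)|/|V|.
// Nodes without successors get 0 here (an assumption; the text does not
// spell out this corner case).
func sensorRank(g graph, pr map[string]float64) map[string]float64 {
	pred := predecessors(g)
	sr := map[string]float64{}
	for v := range g {
		if len(g[v]) == 0 {
			sr[v] = 0
			continue
		}
		sr[v] = pr[v] / float64(len(g[v])) * float64(len(pred[v])) / float64(len(g))
	}
	return sr
}

func main() {
	// Graph (b): the crawlers form a clique and know some of the peers.
	g := graph{
		"n0": {"n1", "n2"},
		"n1": {"n2", "n0"},
		"n2": {"n0"},
		"c0": {"c1", "c2", "n0", "n2"},
		"c1": {"c0", "c2", "n1", "n0"},
		"c2": {"c0", "c1", "n0", "n2"},
	}
	rank := map[string]float64{}
	for v := range g {
		rank[v] = 0.25 // initial rank as in the example
	}
	pr := pageRankOnce(g, rank)
	fmt.Println("PageRank:  ", pr)
	fmt.Println("SensorRank:", sensorRank(g, pr))
}
\end{minted}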
+\todo{percentage of botnet must be crawlers to make a significant change}
% TODO: caption, label
-\begin{figure}[h]
+\begin{figure}[H]
  \centering
  \begin{subfigure}[b]{.5\textwidth}
    \centering
@@ -350,26 +387,55 @@ It would be possible to implement the crawlers so they return other crawlers in
  \end{subfigure}%
  \caption{Differences in graph metrics}\label{fig:sensorbuster}
\end{figure}
+\todo{these examples suck; choose better examples}
-% TODO: pagerank, sensorrank calculations
-\begin{figure}
+Applying PageRank and SensorRank once with an initial rank of \(0.25\) to the example graphs above results in:
+\todo{pagerank, sensorrank calculations, proper example graphs}
+\begin{figure}[H]
  \centering
-  \begin{tabular}{|l|l|l|l|l|}
+  \begin{tabular}{|l|l|l|l|l|l|}
    \hline
-    Node & \(\deg_a^{+}\) & \(\deg_a^{-}\) & \(\deg_b^+\) & \(\deg_b^-\) \\
+    Node & \(\deg^{+}\) & \(\deg^{-}\) & In \ac{wcc}? & PageRank & SensorRank \\
    \hline\hline
-    n0 & 0 & 4 & 0 & 4 \\
-    n1 & 1 & 3 & 1 & 3 \\
-    n2 & 2 & 2 & 2 & 2 \\
-    c0 & 3 & 0 & 5 & 2 \\
-    c1 & 1 & 0 & 3 & 2 \\
-    c2 & 2 & 0 & 4 & 2 \\
+    n0 & 0/0 & 4/4 & no & 0.75/0.5625 & 0.3125/0.2344 \\
+    n1 & 1/1 & 3/3 & no & 0.25/0.1875 & 0.0417/0.0313 \\
+    n2 & 2/2 & 2/2 & no & 0.5/0.375 & 0.3333/0.25 \\
+    c0 & 3/5 & 0/2 & yes (1/3) & 0.0/0.125 & 0.0/0.0104 \\
+    c1 & 1/3 & 0/2 & yes (1/3) & 0.0/0.125 & 0.0/0.0104 \\
+    c2 & 2/4 & 0/2 & yes (1/3) & 0.0/0.125 & 0.0/0.0104 \\
    \hline
  \end{tabular}
+  \caption{Values for metrics from~\autoref{fig:sensorbuster} (a/b)}\label{fig:metrics_table}
\end{figure}
+\todo{big graphs, how many Kn to get significant?}
+While this works for small networks, the crawlers must account for a significant number of peers in the network for this change to be noticeable.\todo{for bigger (generated) graphs?}
+\subsubsection{Excursus: Churn}
+Churn describes the dynamics of peer participation in \ac{p2p} systems, \eg{} join and leave events~\cite{stutzbach_churn_2006}.
+By detecting that a peer just left the system, in combination with knowledge about \acp{as}, peers that recently left and came from an \ac{as} with dynamic IP allocation (\eg{} many consumer broadband providers in the US and Europe) can be placed into the crawler's neighbourhood list.
+If the timing of the churn event correlates with IP rotation in the \ac{as}, it can be assumed that the peer left due to being assigned a new IP address, not due to connectivity issues or going offline, and will not return using the same IP address.
+These peers, when placed in the neighbourhood list of the crawlers, will introduce paths back into the main network and defeat the \ac{wcc} metric.
+It also helps with the PageRank and SensorRank metrics since the crawlers start to look like regular peers without actually supporting the network by relaying messages or propagating active peers.
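A sketch of how such a churn-based check could look is given below; the \mintinline{go}{ChurnEvent} type, the rotation window and the list of dynamic \acp{as} are illustrative assumptions:

\begin{minted}{go}
package main

import (
	"fmt"
	"time"
)

// ChurnEvent records that a peer left the network (illustrative type).
type ChurnEvent struct {
	IP     string
	ASN    int       // autonomous system the address belongs to
	LeftAt time.Time // when the peer was last seen
}

// dynamicASes lists ASes known to rotate consumer IP addresses (assumed data,
// documentation ASNs only).
var dynamicASes = map[int]bool{64496: true, 64511: true}

// rotationWindow is how close a leave event has to be to the AS's known IP
// rotation time to count as address rotation rather than a real disconnect.
const rotationWindow = 10 * time.Minute

// isRotationCandidate reports whether a churn event most likely stems from IP
// rotation, which makes the address a candidate for the crawler's
// neighbourhood list (the peer will not come back under this address).
func isRotationCandidate(e ChurnEvent, asRotation time.Time) bool {
	if !dynamicASes[e.ASN] {
		return false
	}
	delta := e.LeftAt.Sub(asRotation)
	if delta < 0 {
		delta = -delta
	}
	return delta <= rotationWindow
}

func main() {
	rotation := time.Date(2022, 3, 8, 4, 0, 0, 0, time.UTC) // assumed nightly rotation
	e := ChurnEvent{IP: "198.51.100.23", ASN: 64496, LeftAt: rotation.Add(3 * time.Minute)}
	fmt.Println("use in neighbourhood list:", isRotationCandidate(e, rotation))
}
\end{minted}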
%}}} against graph metrics
%}}} strategies
+%{{{ implementation
+\section{Implementation}
+Crawlers in \ac{bms} report to the backend using \acp{grpc}\footnote{\url{https://www.grpc.io}}.
+Both the crawlers and the backend \ac{grpc} server are implemented in the Go\footnote{\url{https://go.dev/}} programming language, so, to make use of existing know-how and to allow others to use the implementation in the future, the coordinator backend and the crawler abstraction were also implemented in Go.
+\Ac{bms} already has an existing abstraction for crawlers.
+This implementation is highly optimized but also tightly coupled and has grown over time.
+The abstraction became leaky and extending it proved to be complicated.
+A new crawler abstraction was created with testability, extensibility and most features of the existing implementation in mind, which can be ported back to be used by the existing crawlers.
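What such a decoupled abstraction could look like is sketched below; the interface, its method names and the \mintinline{go}{Monitor} helper are assumptions, not the actual \ac{bms} code:

\begin{minted}{go}
package crawler

import (
	"context"
	"time"
)

// Peer identifies a bot in the monitored botnet (illustrative type).
type Peer struct {
	Address string
	Botnet  string
}

// Crawler is a minimal, botnet-agnostic abstraction: a protocol
// implementation only has to know how to ask a peer for its neighbours.
type Crawler interface {
	// RequestNeighbourList speaks the botnet protocol to fetch the peer list.
	RequestNeighbourList(ctx context.Context, p Peer) ([]Peer, error)
}

// ReportFunc forwards newly found peers to the backend (e.g. via gRPC).
type ReportFunc func(ctx context.Context, found []Peer) error

// Monitor periodically queries a single assigned peer and reports the result.
// Deciding which peer is visited by which sensor is left to the coordinator.
func Monitor(ctx context.Context, c Crawler, report ReportFunc, target Peer, every time.Duration) error {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			neighbours, err := c.RequestNeighbourList(ctx, target)
			if err != nil {
				continue // unreachable peers are reported separately
			}
			if err := report(ctx, neighbours); err != nil {
				return err
			}
		}
	}
}
\end{minted}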
+%}}} implementation
% vim: set filetype=tex ts=2 sw=2 tw=0 et foldmethod=marker spell :

Binary file not shown.

View File

@@ -29,6 +29,11 @@ headsepline,
\usepackage{amsfonts}
\usepackage{mathtools}
+% positioning
+\usepackage{float}
+\usepackage{todonotes}
% timelines
\usepackage{chronology}

View File

@@ -18,6 +18,7 @@ let
    dejavu
    latexmk
    siunitx
+    todonotes
    # code listings
    minted
@@ -49,7 +50,7 @@ pkgs.mkShell {
    # language correction
    pkgs.languagetool
    # detex script
-    pkgs.python
+    pkgs.python3
    # make
    pkgs.gnumake
    # PDF viewer