We use the internet to communicate, shop, handle financial transactions, and much more.
Many personal and professional workflows are so dependent on the internet that they stop working while offline, and with the pandemic we are living through, this dependency has grown even bigger.
The number of connected \ac{iot} devices is around 10 billion in 2021 and is estimated to grow steadily over the coming years, reaching 25 billion in 2030~\cite{bib:statista_iot_2020}.
Many of these devices run on outdated software, don't receive any updates, and don't follow general security best practices.
While in 2016 only \SI{77}{\percent} of German households had a broadband connection with a bandwidth of \SI{50}{\mega\bit\per\second} or more, in 2020 it was already \SI{95}{\percent} with more than \SI{50}{\mega\bit\per\second} and \SI{59}{\percent} with at least \SI{1000}{\mega\bit\per\second}~\cite{bib:statista_broadband_2021}\todo{graph as image?}.
This makes them an attractive target for botmasters: they are easy to infect, are always online, and sit behind internet connections that are getting faster and faster. Since they are small devices that often run without any direct user interaction, an infection can go unnoticed for a long time.
In recent years, \ac{iot} botnets have been responsible for some of the biggest \ac{ddos} attacks ever recorded---creating up to \SI{1}{\tera\bit\per\second} of traffic~\cite{bib:ars_ddos_2016}.
A botnet is a network of infected computers with some means of communication to control the infected systems.
Classic botnets use one or more central coordinating hosts called \ac{c2} servers.
These \ac{c2} servers can use any protocol as a communication channel with the infected hosts, ranging from \ac{irc} and \ac{http} to Twitter~\cite{bib:pantic_covert_2015}.
Infected systems are abused in many ways, \eg{} for \ac{ddos} attacks, for banking fraud, as proxies to hide the attacker's identity, or to send spam emails.
Analyzing and shutting down a centralized botnet is comparatively easy since every bot knows the IP address, domain name, Twitter handle or \ac{irc} channel the \ac{c2} servers are using.
A coordinated operation with help from law enforcement, hosting providers, domain registrars, and platform providers can shut down or take over the operation by changing how requests are routed or by simply shutting down the controlling servers or accounts.
To complicate take-down attempts, botnet operators came up with a number of ideas: \acp{dga} use pseudorandomly generated domain names to render simple blacklists of domains ineffective~\cite{bib:antonakakis_dga_2012}, and fast-flux \ac{dns} assigns a large pool of IP addresses randomly to the \ac{c2} domains to prevent IP-based blacklisting~\cite{bib:nazario_as_2008}.
A number of botnet operations were shut down like this~\cite{bib:nadji_beheading_2013}, and as the defenders refined their techniques, so did the attackers---the idea of \ac{p2p} botnets was born.
The idea is to build a decentralized network without the \acp{spof} that central \ac{c2} servers represent, as shown in \autoref{fig:p2p}.
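As a toy illustration of the \ac{dga} idea---not the algorithm of any real botnet family---a generator could derive a daily list of candidate domains from the current date and a hard-coded seed:

\begin{minted}{go}
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/rand"
	"time"
)

// toyDGA derives n pseudorandom candidate domains from the current date and a
// hard-coded seed. Purely illustrative; bots and botmaster only need to agree
// on the seed to compute the same list of rendezvous domains for each day.
func toyDGA(date time.Time, seed string, n int) []string {
	const letters = "abcdefghijklmnopqrstuvwxyz"
	h := sha256.Sum256([]byte(seed + date.Format("2006-01-02")))
	rng := rand.New(rand.NewSource(int64(binary.BigEndian.Uint64(h[:8]))))

	domains := make([]string, 0, n)
	for i := 0; i < n; i++ {
		name := make([]byte, 12)
		for j := range name {
			name[j] = letters[rng.Intn(len(letters))]
		}
		domains = append(domains, string(name)+".example")
	}
	return domains
}

func main() {
	fmt.Println(toyDGA(time.Now(), "toy-seed", 3))
}
\end{minted}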
In a \ac{p2p} botnet, each node in the network knows a number of its neighbors and connects to those; each of these neighbors has a list of neighbors of its own, and so on.
This lack of a \ac{spof} makes \ac{p2p} botnets more resilient to take-down attempts since taking down individual nodes does not stop the communication, and botmasters can easily rejoin the network and send commands.
The constantly growing damage produced by botnets has many researchers and law enforcement agencies trying to shut down these operations~\cite{bib:nadji_beheading_2013, bib:nadji_still_2017, bib:dittrich_takeover_2012}.
The monetary value of these botnets directly correlates with the amount of effort botmasters are willing to put into implementing defense mechanisms against take-down attempts.
Some of these countermeasures include deterrence, which limits the number of allowed bots per IP address or subnet to 1; blacklisting, where known crawlers and sensors are blocked from communicating with other bots in the network (mostly IP based); disinformation, when fake bots are placed in the neighborhood lists, which invalidates the data collected by crawlers; and active retaliation like \ac{ddos} attacks against sensors or crawlers~\cite{bib:andriesse_reliable_2015}.
For passive detection, traffic flows are analysed in large amounts of collected network traffic (\eg{} from \acp{isp}).
This has the advantage that botmasters cannot detect or prevent data collection of that kind, but it is not trivial to distinguish valid \ac{p2p} application traffic (\eg{} BitTorrent, Skype, cryptocurrencies) from \ac{p2p} bot traffic; in addition, such data is hard to obtain and knowledge of some known bots is required to bootstrap the analysis~\cite{bib:zhang_building_2014}.
In this case, a subset of the botnet protocol is reimplemented to place pseudo-bots or sensors in the network, which will only communicate with other nodes but won't accept or execute commands to perform malicious actions.
The difference in behaviour from the reference implementation and conspicuous graph properties of these sensors (\eg{} a high \(\deg^{+}\) combined with a low \(\deg^{-}\)) allow botmasters to detect and block the sensor nodes.
The implementation of the concepts of this work will be done as part of \ac{bms}\footnotemark, a monitoring platform for \ac{p2p} botnets described by \citeauthor{bib:bock_poster_2019} in~\cite{bib:bock_poster_2019}.
\Ac{bms} uses a hybrid active approach of crawlers and sensors (reimplementations of the \ac{p2p} protocol of a botnet, that won't perform malicious actions) to collect live data from active botnets.
In an earlier project, I implemented different node ranking algorithms (among others \enquote{PageRank}~\cite{bib:page_pagerank_1998}) to detect sensors and crawlers in a botnet, as described in \citetitle{bib:karuppayah_sensorbuster_2017}.
The goal of this work is to make detection mechanisms like this harder for botmasters by centralizing the coordination of the system's crawlers and sensors, thereby reducing the sensor nodes' ranks for specific graph metrics.
The final result should be as general as possible and not depend on any specific botnet's behaviour, but it assumes that every \ac{p2p} botnet offers some kind of \enquote{getNeighbourList} method in its protocol that allows other peers to request a list of active nodes to connect to.
The idea for this work is to report newly found nodes back to the \ac{bms} backend first, where the graph of the known network is built and a sensor is selected such that the specific ranking algorithm does not produce a suspiciously high or low value for it.
If it is not possible to select a specific sensor such that the monitoring activity stays inconspicuous, the coordinator can perform a complete shuffle of all nodes between the sensors to restore the desired graph properties, or warn that more sensors are required to stay undetected.
The improved sensor system should allow new sensors to register themselves and their capabilities (\eg{} bandwidth, geolocation), so the workload can be distributed accordingly between hosts.
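Expressed as a Go interface, this minimal assumption about the botnet protocol could look like the following sketch; the names are illustrative and not taken from the \ac{bms} codebase:

\begin{minted}{go}
// Peer identifies a node in the botnet by its network address.
type Peer struct {
	IP   string
	Port uint16
}

// NeighbourListProvider captures the only capability this work assumes every
// P2P botnet protocol to offer: asking a peer for other active peers it knows.
type NeighbourListProvider interface {
	GetNeighbourList(peer Peer) ([]Peer, error)
}
\end{minted}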
Further work might even consider autoscaling the monitoring activity using some kind of cloud computing provider.
If time allows, \ac{bsf}\footnotemark{} will be used to simulate a botnet, place sensors in the simulated network, and measure the improvement achieved by the coordinated monitoring effort.
\item\mintinline{go}{registerSensor(capabilities)}: Register a new sensor with its capabilities (which botnet, available bandwidth, \ldots). This is called periodically and used to determine which crawler is still active when splitting the workload.
\item\mintinline{go}{requestTasks() []PeerTask}: Receive a batch of crawl tasks from the coordinator. Each task consists of the target peer, whether the crawler should start or stop the operation, when it should start and stop monitoring, and the frequency. A sketch of both primitives as a Go interface is shown below.
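Taken together, the coordinator interface offered to the sensors could be sketched as follows; the \mintinline{go}{Capabilities} fields are illustrative examples, not a fixed schema, and the exported method names only mirror the primitives described above:

\begin{minted}{go}
// Capabilities describes what a sensor can offer to the coordinator.
type Capabilities struct {
	Botnet    string // which botnet protocol the sensor speaks
	Bandwidth int    // available bandwidth, e.g. in Mbit/s
	Location  string // rough geolocation of the host
}

// Coordinator is a sketch of the interface the BMS backend exposes to its
// sensors and crawlers. PeerTask is described with the crawler abstraction.
type Coordinator interface {
	// RegisterSensor is called periodically and doubles as a liveness
	// signal when splitting the workload.
	RegisterSensor(c Capabilities) error
	// RequestTasks returns the batch of peers this sensor should work on.
	RequestTasks() ([]PeerTask, error)
}
\end{minted}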
Load balancing in itself does not help prevent the detection of crawlers but it allows better usage of available resources.
No peer will be crawled by more than one crawler, and it allows crawling bigger botnets where the current approach would reach its limit and could otherwise only be worked around by scaling up the machine the crawler runs on.
Load balancing allows scaling out, which can be more cost-effective.
Using collaborative crawlers, an arbitrarily high crawl frequency can be achieved without being blacklisted.
Let \(L \in\mathbb{N}\) be the frequency limit at which a crawler will be blacklisted and \(F \in\mathbb{N}\) the crawl frequency that should be achieved.
The number of crawlers \(C\) required to achieve the frequency \(F\) without being blacklisted and the offset \(O\) between crawlers are defined as
\[
    C = \left\lceil\frac{F}{L}\right\rceil \qquad O = \frac{\SI{1}{\request}}{F}
\]
Taking advantage of the \mintinline{go}{StartAt} field of the \mintinline{go}{PeerTask} returned by the \mintinline{go}{requestTasks} primitive above, the crawlers can be scheduled offset by \(O\) at a frequency \(L\) to ensure that the overall requests to each peer are evenly distributed over time.
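Both definitions follow directly from the worked examples below: \(C\) rounds the ratio of desired to allowed frequency up to the next integer, and \(O\) is the period corresponding to the overall frequency \(F\).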
Given a limit \(L =\SI{5}{\request\per100\second}\), crawling a botnet at \(F =\SI{20}{\request\per100\second}\) requires \(C =\left\lceil\frac{\SI{20}{\request\per100\second}}{\SI{5}{\request\per100\second}}\right\rceil=4\) crawlers.
Those crawlers must be scheduled \(O =\frac{\SI{1}{\request}}{\SI{20}{\request\per100\second}}=\SI{5}{\second}\) apart at a frequency of \(L\) for an even request distribution.
As can be seen in~\autoref{fig:crawler_timeline}, each crawler \(C_0\) to \(C_3\) performs only \SI{5}{\request\per 100\second} while overall achieving \(\SI{20}{\request\per100\second}\).
Vice versa, given a number of crawlers \(C\) and a request limit \(L\), the effective frequency \(F\) can be maximized to \(F = C \times L\) without hitting the limit \(L\) and being blocked.
Using the example from above with \(L =\SI{5}{\request\per100\second}\) but now only two crawlers \(C =2\), it is still possible to achieve an effective frequency of \(F =2\times\SI{5}{\request\per100\second}=\SI{10}{\request\per100\second}\) and \(O =\frac{\SI{1}{\request}}{\SI{10}{\request\per100\second}}=\SI{10}{\second}\):
% \caption{Timeline of crawler events as seen from a peer}\label{fig:crawler_timeline}
\end{figure}
%}}} fig:crawler_timeline
While the effective frequency of the whole system is halved compared to~\autoref{fig:crawler_timeline}, it is still possible to double the frequency over the limit.
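A small helper makes this calculation explicit; the following sketch is a hypothetical illustration and not part of the \ac{bms} codebase:

\begin{minted}{go}
import "time"

// crawlerSchedule computes how many crawlers are needed to reach the desired
// frequency without any single crawler exceeding the per-peer limit, and the
// offset between their start times.
// limit and desired are given in requests per observation window (e.g. per
// 100 seconds); window is the length of that observation window.
func crawlerSchedule(limit, desired int, window time.Duration) (crawlers int, offset time.Duration) {
	crawlers = (desired + limit - 1) / limit // C = ceil(F / L)
	offset = window / time.Duration(desired) // O = 1 request / F
	return crawlers, offset
}
\end{minted}

For \(L =\SI{5}{\request\per100\second}\) and \(F =\SI{20}{\request\per100\second}\), \mintinline{go}{crawlerSchedule(5, 20, 100*time.Second)} yields the four crawlers and \SI{5}{\second} offset from the example above.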
Building a complete graph \(G_C = K_{\abs{C}}\) between the crawlers by making them return the other crawlers on peer list requests would still produce a disconnected component, and while it is bigger and maybe not as obvious at first glance, it is still easily detectable since there is no path from \(G_C\) back to the main network (see~\autoref{fig:sensorbuster2} and~\autoref{fig:metrics_table}).
With \(v \in V\), \(\text{succ}(v)\) being the set of successors of \(v\), and \(\text{pred}(v)\) being the set of predecessors of \(v\), PageRank is recursively defined as~\cite{bib:page_pagerank_1998}:
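\[
    \text{PR}(v) = (1 -\text{dampingFactor}) +\text{dampingFactor}\times\sum_{u \in\text{pred}(v)}\frac{\text{PR}(u)}{\abs{\text{succ}(u)}}
\]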
For the first iteration, the PageRank of all nodes is set to the same initial value. When iterating often enough, any initial value can be chosen~\cite{bib:page_pagerank_1998}.\todo{how often? experiments!}
In our experiments on a snapshot of the Sality~\cite{bib:falliere_sality_2011} botnet exported from \ac{bms} over the span of \daterange{2021-04-22}{2021-04-29}\todo{export timespan}, three iterations were enough to obtain values distinct enough to detect sensors and crawlers.
When using PageRank to rank websites in search results, the dampingFactor describes the probability that a person visiting links on the web will continue doing so.
For simplicity---and since it is not required to model human behaviour for automated crawling and ranking---a dampingFactor of \(1.0\) will be used, which simplifies the formula to
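\[
    \text{PR}(v) =\sum_{u \in\text{pred}(v)}\frac{\text{PR}(u)}{\abs{\text{succ}(u)}}
\]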
\todo{big graphs, how many Kn to get significant?}
While this works for small networks, the crawlers must account for a significant fraction of the peers in the network for this change to be noticeable.\todo{for bigger (generated) graphs?}
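To make the computation concrete, a minimal Go sketch of this simplified PageRank iteration is shown below; it assumes the graph is given as successor lists and is only an illustration, not the \ac{bms} implementation:

\begin{minted}{go}
// pageRankSimplified runs the simplified (dampingFactor = 1) PageRank
// iteration on a graph given as successor lists: succs[v] holds all nodes v
// points to. All nodes start with the same initial value.
func pageRankSimplified(succs map[int][]int, iterations int) map[int]float64 {
	rank := make(map[int]float64, len(succs))
	for v := range succs {
		rank[v] = 1.0 / float64(len(succs))
	}
	for i := 0; i < iterations; i++ {
		next := make(map[int]float64, len(succs))
		for u, out := range succs {
			if len(out) == 0 {
				continue
			}
			// Every successor of u receives an equal share of u's rank,
			// i.e. PR(v) = sum over predecessors u of PR(u)/|succ(u)|.
			share := rank[u] / float64(len(out))
			for _, v := range out {
				next[v] += share
			}
		}
		rank = next
	}
	return rank
}
\end{minted}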
By detecting if a peer just left the system, in combination with knowledge about \acp{as}, peers that just left and came from an \ac{as} with dynamic IP allocation (\eg{} many consumer broadband providers in the US and Europe) can be placed into the crawlers' neighbourhood lists.
If the timing of the churn event correlates with IP rotation in the \ac{as}, it can be assumed that the peer left due to being assigned a new IP address---not due to connectivity issues or going offline---and will not return using the same IP address.
These peers, when placed in the neighbourhood list of the crawlers, will introduce paths back into the main network and defeat the \ac{wcc} metric.
It also helps with the PageRank and SensorRank metrics since the crawlers start to look like regular peers without actually supporting the network by relaying messages or propagating active peers.
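A heuristic along these lines could be sketched as follows; the types and thresholds are purely hypothetical and would have to be backed by the \ac{as} data actually available in \ac{bms}:

\begin{minted}{go}
import "time"

// ChurnEvent describes a peer (as sketched earlier) that just left the network.
type ChurnEvent struct {
	Peer   Peer
	LeftAt time.Time
}

// ASInfo is a hypothetical view of what is known about an autonomous system.
type ASInfo struct {
	DynamicIPAllocation bool          // does the AS hand out dynamic addresses?
	RotationAt          time.Time     // last known forced IP rotation
	RotationTolerance   time.Duration // how close a churn event must be to count
}

// isNeighbourhoodCandidate returns true if the peer most likely left because
// it was assigned a new IP address and will not return under the old one,
// making it a safe entry for a crawler's neighbourhood list.
func isNeighbourhoodCandidate(e ChurnEvent, as ASInfo) bool {
	if !as.DynamicIPAllocation {
		return false
	}
	delta := e.LeftAt.Sub(as.RotationAt)
	if delta < 0 {
		delta = -delta
	}
	return delta <= as.RotationTolerance
}
\end{minted}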
Crawlers in \ac{bms} report to the backend using \ac{grpc}\footnote{\url{https://www.grpc.io}}.
Both the crawlers and the backend \ac{grpc} server are implemented in the Go\footnote{\url{https://go.dev/}} programming language. To make use of existing know-how and to allow others to use the implementation in the future, the coordinator backend and crawler abstraction were also implemented in Go.
\Ac{bms} already has an existing abstraction for crawlers.
This implementation is highly optimized but also tightly coupled and has grown over time.
The abstraction became leaky and extending it proved to be complicated.
A new crawler abstraction was created with testability, extensibility, and most features of the existing implementation in mind; it can be ported back to be used by the existing crawlers.
This is used to implement the bootstrapping mechanism of the old crawler, where the list of bootstrap nodes is loaded from a text file once when the crawler is started.
The \mintinline{go}{PeerTask} instances returned by \mintinline{go}{FindPeer} contain the IP address and port of the peer, whether the crawler should start or stop the operation, when to start and stop crawling, and in which interval the peer should be crawled.
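The corresponding types could be sketched as follows; apart from \mintinline{go}{PeerTask}, \mintinline{go}{FindPeer}, and the \mintinline{go}{StartAt} field mentioned earlier, the field and method names are assumptions based on the description above:

\begin{minted}{go}
import "time"

// PeerTask tells a crawler which peer to work on and how.
type PeerTask struct {
	IP       string        // address of the peer
	Port     uint16        // port of the peer
	StartAt  time.Time     // when to start crawling
	StopAt   time.Time     // when to stop crawling
	Interval time.Duration // how often the peer should be contacted
	Stop     bool          // true if an ongoing crawl should be stopped
}

// Finder produces the tasks a crawler should work on, e.g. from a bootstrap
// file or from the coordinator.
type Finder interface {
	FindPeer() ([]PeerTask, error)
}
\end{minted}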
For each task, a \mintinline{go}{CrawlPeer} and \mintinline{go}{PingPeer} worker is started or stopped as specified in the received \mintinline{go}{PeerTask}.
These tasks use the \mintinline{go}{ReportPeer} interface to report any new peer that is found.
Current report possibilities are \mintinline{go}{LoggingReport} to simply log new peers to get feedback from the crawler at runtime, and \mintinline{go}{BMSReport} which reports back to \ac{bms}.
\mintinline{go}{BatchedReport} delegates to a \mintinline{go}{ReportPeer} instance, batches newly found peers up to a specified batch size, and only then flushes and actually reports.
\mintinline{go}{AutoCommitReport} will automatically flush a delegated \mintinline{go}{ReportPeer} instance after a fixed amount of time and is used in combination with \mintinline{go}{BatchedReport} to ensure the batches are written regularly, even if the batch limit is not reached yet.
\mintinline{go}{CombinedReport} works analogously to \mintinline{go}{CombinedFinder} and combines many \mintinline{go}{ReportPeer} instances into one.
\mintinline{go}{PingPeer} and \mintinline{go}{CrawlPeer} use the implementation of the botnet \mintinline{go}{Protocol} to perform the actual crawling in predefined intervals, which can be overridden on a per-\mintinline{go}{PeerTask} basis.
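A sketch of this reporting abstraction is given below; only the type names are taken from the description above, the method signatures and buffering details are assumptions:

\begin{minted}{go}
// ReportPeer is implemented by everything that can report newly found peers.
type ReportPeer interface {
	Report(peers ...Peer) error
	Flush() error
}

// BatchedReport collects peers for a delegated ReportPeer and only flushes
// once the configured batch size is reached.
type BatchedReport struct {
	Delegate  ReportPeer
	BatchSize int
	buffer    []Peer
}

func (b *BatchedReport) Report(peers ...Peer) error {
	b.buffer = append(b.buffer, peers...)
	if len(b.buffer) < b.BatchSize {
		return nil
	}
	return b.Flush()
}

// Flush forwards all buffered peers to the delegate; AutoCommitReport would
// call this periodically even when the batch limit has not been reached yet.
func (b *BatchedReport) Flush() error {
	if len(b.buffer) == 0 {
		return nil
	}
	err := b.Delegate.Report(b.buffer...)
	b.buffer = b.buffer[:0]
	return err
}
\end{minted}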
Following this work, it should be possible to rewrite the existing crawlers to use the new abstraction.
This might bring some performance issues to light which can be solved by investigating the optimizations from the old implementation and applying them to the new one.
Another way to expand on this work is automatically scaling the available crawlers up and down, depending on the botnet size and the number of concurrently online peers.
Doing so would allow a constant crawl interval for even highly volatile botnets.