<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Andreas Ziegler</title><link>https://andreasaziegler.github.io/</link><atom:link href="https://andreasaziegler.github.io/index.xml" rel="self" type="application/rss+xml"/><description>Andreas Ziegler</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2022 Andreas Ziegler</copyright><lastBuildDate>Sat, 01 Jun 2030 13:00:00 +0000</lastBuildDate><image><url>https://andreasaziegler.github.io/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>Andreas Ziegler</title><link>https://andreasaziegler.github.io/</link></image><item><title>Example Talk</title><link>https://andreasaziegler.github.io/talk/example-talk/</link><pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/talk/example-talk/</guid><description>&lt;div class="alert alert-note">
&lt;div>
Click on the &lt;strong>Slides&lt;/strong> button above to view the built-in slides feature.
&lt;/div>
&lt;/div>
&lt;p>Slides can be added in a few ways:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Create&lt;/strong> slides using Wowchemy&amp;rsquo;s &lt;a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener">&lt;em>Slides&lt;/em>&lt;/a> feature and link them using the &lt;code>slides&lt;/code> parameter in the front matter of the talk file&lt;/li>
&lt;li>&lt;strong>Upload&lt;/strong> an existing slide deck to &lt;code>static/&lt;/code> and link it using the &lt;code>url_slides&lt;/code> parameter in the front matter of the talk file&lt;/li>
&lt;li>&lt;strong>Embed&lt;/strong> your slides (e.g. Google Slides) or presentation video on this page using &lt;a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">shortcodes&lt;/a>.&lt;/li>
&lt;/ul>
&lt;p>Further event details, including &lt;a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">page elements&lt;/a> such as image galleries, can be added to the body of this page.&lt;/p></description></item><item><title>Event-camera, camera and robot arm calibration</title><link>https://andreasaziegler.github.io/teaching/calibration/</link><pubDate>Sun, 21 Feb 2021 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/teaching/calibration/</guid><description>&lt;h2 id="description">Description&lt;/h2>
&lt;p>The Cognitive Systems group at the University of Tübingen uses a table tennis robot system to conduct research on various topics in robotics, control, computer vision, machine learning and reinforcement learning. So far, the system has used up to five cameras for the perception pipeline. Recently, the group added event-based cameras to the sensor suite.&lt;/p>
&lt;p>Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of μs), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision.&lt;/p>
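&lt;p>As a rough illustration of this data format, below is a minimal sketch of accumulating events into a frame for visualization. It assumes the common (t, x, y, polarity) event convention; the names, resolution and values are illustrative, not those of our cameras.&lt;/p>
&lt;pre tabindex="0">&lt;code class="language-python">import numpy as np

# Illustrative event stream: each event is a (timestamp, x, y, polarity)
# tuple, following the common convention. The values are made up.
events = np.array([
    (0.000001, 120, 80, 1),    # brightness increase at pixel (120, 80)
    (0.000005, 121, 80, -1),   # brightness decrease at a neighboring pixel
], dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i4")])

def accumulate(events, height, width):
    """Sum event polarities per pixel to form a 2D frame for visualization."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame

frame = accumulate(events, height=480, width=640)
&lt;/code>&lt;/pre>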
&lt;p>With these new sensors in place, the question arises of how the whole system should be calibrated. Eye-to-hand calibration (calibrating the camera with respect to the robot arm) and camera calibration are well-studied topics in the literature. However, since event-based cameras are still relatively new, there is very little literature on event-based camera calibration, especially for eye-to-hand calibration.&lt;/p>
&lt;p>In a first step, the student should study the state-of-the-art calibration methods relevant to our table tennis robot system. Building on this, the goal of the thesis is to develop new methods for a calibration toolbox that allows the system to be calibrated automatically.&lt;/p>
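&lt;p>For the frame-camera part of such a toolbox, OpenCV already ships a hand-eye solver that could serve as a baseline. The sketch below is a minimal example with synthetic poses; OpenCV&amp;rsquo;s &lt;code>calibrateHandEye&lt;/code> addresses the eye-in-hand variant, and the eye-to-hand case can be reduced to it by inverting the gripper poses. In practice the pose pairs would come from the robot&amp;rsquo;s forward kinematics and a detected calibration target.&lt;/p>
&lt;pre tabindex="0">&lt;code class="language-python">import cv2
import numpy as np

def rand_pose(rng):
    """Random rigid transform as a 4x4 matrix (rotation via Rodrigues)."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, 3))
    T[:3, 3] = rng.uniform(-0.5, 0.5, 3)
    return T

rng = np.random.default_rng(0)
X = rand_pose(rng)         # ground-truth camera-to-gripper transform
T_target = rand_pose(rng)  # fixed calibration target in the base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):        # synthetic robot poses with consistent observations
    G = rand_pose(rng)                       # gripper-to-base pose
    T_t2c = np.linalg.inv(G @ X) @ T_target  # implied target-to-camera pose
    R_g2b.append(G[:3, :3]); t_g2b.append(G[:3, 3])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_est, X[:3, :3], atol=1e-6))  # recovers the ground truth
&lt;/code>&lt;/pre>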
&lt;h2 id="requirements">Requirements&lt;/h2>
&lt;ul>
&lt;li>Familiar with &amp;ldquo;traditional&amp;rdquo; Computer Vision&lt;/li>
&lt;li>C++ and/or Python&lt;/li>
&lt;/ul></description></item><item><title>Spiking neural network for event-based ball detection</title><link>https://andreasaziegler.github.io/teaching/snn/</link><pubDate>Sun, 21 Feb 2021 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/teaching/snn/</guid><description>&lt;h2 id="description">Description&lt;/h2>
&lt;p>Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of μs), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision.&lt;/p>
&lt;p>So far, most learning approaches applied to event data convert a batch of events into a tensor and then use a conventional CNN as the network. While such approaches achieve state-of-the-art performance, they do not exploit the asynchronous nature of the event data. Spiking Neural Networks (SNNs), on the other hand, are bio-inspired networks that can process the output of event-based cameras directly: they operate on information conveyed as temporal spikes rather than numeric values. This makes SNNs an ideal counterpart for event-based cameras.&lt;/p>
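&lt;p>To make this concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic building block of most SNNs, processing an asynchronous spike train. All constants (time constant, threshold, weights) are illustrative.&lt;/p>
&lt;pre tabindex="0">&lt;code class="language-python">import numpy as np

def lif_neuron(spike_times, weights, tau=0.02, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    exponentially between input spikes, jumps by the synaptic weight at
    each spike, and emits an output spike when it crosses the threshold."""
    v, t_prev, out = 0.0, 0.0, []
    for t, w in sorted(zip(spike_times, weights)):
        v *= np.exp(-(t - t_prev) / tau)  # leak since the last spike
        v += w                            # integrate the incoming spike
        if v >= threshold:
            out.append(t)                 # fire ...
            v = 0.0                       # ... and reset
        t_prev = t
    return out

print(lif_neuron([0.001, 0.002, 0.003], [0.5, 0.4, 0.3]))  # fires at t=0.003
&lt;/code>&lt;/pre>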
&lt;p>The goal of this thesis is to investigate and evaluate how an SNN can be used together with our event-based cameras to detect and track table tennis balls. The Cognitive Systems group has a table tennis robot system on which the developed ball tracker can be deployed and compared to other methods.&lt;/p>
&lt;h2 id="requirements">Requirements&lt;/h2>
&lt;ul>
&lt;li>Familiar with &amp;ldquo;traditional&amp;rdquo; Computer Vision&lt;/li>
&lt;li>Deep Learning&lt;/li>
&lt;li>Python&lt;/li>
&lt;/ul></description></item><item><title>Focus on time: dynamic imaging reveals stretch-dependent cell relaxation and nuclear deformation</title><link>https://andreasaziegler.github.io/publication/bj21-horvath/</link><pubDate>Sat, 02 Jan 2021 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/publication/bj21-horvath/</guid><description/></item><item><title>Time-controlled Multichannel Dynamic Traction Imaging of Biaxially Stretched Adherent Cells</title><link>https://andreasaziegler.github.io/publication/biorxiv20-horvath/</link><pubDate>Tue, 03 Mar 2020 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/publication/biorxiv20-horvath/</guid><description/></item><item><title>Exploration Without Global Consistency Using Local Volume Consolidation</title><link>https://andreasaziegler.github.io/publication/isrr19-cieslewski/</link><pubDate>Tue, 03 Sep 2019 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/publication/isrr19-cieslewski/</guid><description/></item><item><title>Slides</title><link>https://andreasaziegler.github.io/slides/example/</link><pubDate>Tue, 05 Feb 2019 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/slides/example/</guid><description>&lt;h1 id="create-slides-in-markdown-with-wowchemy">Create slides in Markdown with Wowchemy&lt;/h1>
&lt;p>&lt;a href="https://wowchemy.com/" target="_blank" rel="noopener">Wowchemy&lt;/a> | &lt;a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener">Documentation&lt;/a>&lt;/p>
&lt;hr>
&lt;h2 id="features">Features&lt;/h2>
&lt;ul>
&lt;li>Efficiently write slides in Markdown&lt;/li>
&lt;li>3-in-1: Create, Present, and Publish your slides&lt;/li>
&lt;li>Supports speaker notes&lt;/li>
&lt;li>Mobile friendly slides&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="controls">Controls&lt;/h2>
&lt;ul>
&lt;li>Next: &lt;code>Right Arrow&lt;/code> or &lt;code>Space&lt;/code>&lt;/li>
&lt;li>Previous: &lt;code>Left Arrow&lt;/code>&lt;/li>
&lt;li>Start: &lt;code>Home&lt;/code>&lt;/li>
&lt;li>Finish: &lt;code>End&lt;/code>&lt;/li>
&lt;li>Overview: &lt;code>Esc&lt;/code>&lt;/li>
&lt;li>Speaker notes: &lt;code>S&lt;/code>&lt;/li>
&lt;li>Fullscreen: &lt;code>F&lt;/code>&lt;/li>
&lt;li>Zoom: &lt;code>Alt + Click&lt;/code>&lt;/li>
&lt;li>&lt;a href="https://github.com/hakimel/reveal.js#pdf-export" target="_blank" rel="noopener">PDF Export&lt;/a>: &lt;code>E&lt;/code>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="code-highlighting">Code Highlighting&lt;/h2>
&lt;p>Inline code: &lt;code>variable&lt;/code>&lt;/p>
&lt;p>Code block:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-python" data-lang="python">&lt;span class="line">&lt;span class="cl">&lt;span class="n">porridge&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="s2">&amp;#34;blueberry&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">if&lt;/span> &lt;span class="n">porridge&lt;/span> &lt;span class="o">==&lt;/span> &lt;span class="s2">&amp;#34;blueberry&amp;#34;&lt;/span>&lt;span class="p">:&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nb">print&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s2">&amp;#34;Eating...&amp;#34;&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;hr>
&lt;h2 id="math">Math&lt;/h2>
&lt;p>In-line math: $x + y = z$&lt;/p>
&lt;p>Block math:&lt;/p>
&lt;p>$$
f\left( x \right) = \;\frac{{2\left( {x + 4} \right)\left( {x - 4} \right)}}{{\left( {x + 4} \right)\left( {x + 1} \right)}}
$$&lt;/p>
&lt;hr>
&lt;h2 id="fragments">Fragments&lt;/h2>
&lt;p>Make content appear incrementally&lt;/p>
&lt;pre tabindex="0">&lt;code>{{% fragment %}} One {{% /fragment %}}
{{% fragment %}} **Two** {{% /fragment %}}
{{% fragment %}} Three {{% /fragment %}}
&lt;/code>&lt;/pre>&lt;p>Press &lt;code>Space&lt;/code> to play!&lt;/p>
&lt;span class="fragment">
One
&lt;/span>
&lt;span class="fragment">
&lt;strong>Two&lt;/strong>
&lt;/span>
&lt;span class="fragment">
Three
&lt;/span>
&lt;hr>
&lt;p>A fragment can accept two optional parameters:&lt;/p>
&lt;ul>
&lt;li>&lt;code>class&lt;/code>: use a custom style (requires definition in custom CSS)&lt;/li>
&lt;li>&lt;code>weight&lt;/code>: sets the order in which a fragment appears&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="speaker-notes">Speaker Notes&lt;/h2>
&lt;p>Add speaker notes to your presentation&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-markdown" data-lang="markdown">&lt;span class="line">&lt;span class="cl">{{% speaker_note %}}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">-&lt;/span> Only the speaker can read these notes
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">-&lt;/span> Press &lt;span class="sb">`S`&lt;/span> key to view
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{{% /speaker_note %}}
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Press the &lt;code>S&lt;/code> key to view the speaker notes!&lt;/p>
&lt;aside class="notes">
&lt;ul>
&lt;li>Only the speaker can read these notes&lt;/li>
&lt;li>Press &lt;code>S&lt;/code> key to view&lt;/li>
&lt;/ul>
&lt;/aside>
&lt;hr>
&lt;h2 id="themes">Themes&lt;/h2>
&lt;ul>
&lt;li>black: Black background, white text, blue links (default)&lt;/li>
&lt;li>white: White background, black text, blue links&lt;/li>
&lt;li>league: Gray background, white text, blue links&lt;/li>
&lt;li>beige: Beige background, dark text, brown links&lt;/li>
&lt;li>sky: Blue background, thin dark text, blue links&lt;/li>
&lt;/ul>
&lt;hr>
&lt;ul>
&lt;li>night: Black background, thick white text, orange links&lt;/li>
&lt;li>serif: Cappuccino background, gray text, brown links&lt;/li>
&lt;li>simple: White background, black text, blue links&lt;/li>
&lt;li>solarized: Cream-colored background, dark green text, blue links&lt;/li>
&lt;/ul>
&lt;hr>
&lt;section data-noprocess data-shortcode-slide
data-background-image="/media/boards.jpg"
>
&lt;h2 id="custom-slide">Custom Slide&lt;/h2>
&lt;p>Customize the slide style and background&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-markdown" data-lang="markdown">&lt;span class="line">&lt;span class="cl">{{&lt;span class="p">&amp;lt;&lt;/span> &lt;span class="nt">slide&lt;/span> &lt;span class="na">background-image&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s">&amp;#34;/media/boards.jpg&amp;#34;&lt;/span> &lt;span class="p">&amp;gt;&lt;/span>}}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{{&lt;span class="p">&amp;lt;&lt;/span> &lt;span class="nt">slide&lt;/span> &lt;span class="na">background-color&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s">&amp;#34;#0000FF&amp;#34;&lt;/span> &lt;span class="p">&amp;gt;&lt;/span>}}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{{&lt;span class="p">&amp;lt;&lt;/span> &lt;span class="nt">slide&lt;/span> &lt;span class="na">class&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s">&amp;#34;my-style&amp;#34;&lt;/span> &lt;span class="p">&amp;gt;&lt;/span>}}
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;hr>
&lt;h2 id="custom-css-example">Custom CSS Example&lt;/h2>
&lt;p>Let&amp;rsquo;s make headers navy colored.&lt;/p>
&lt;p>Create &lt;code>assets/css/reveal_custom.css&lt;/code> with:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-css" data-lang="css">&lt;span class="line">&lt;span class="cl">&lt;span class="p">.&lt;/span>&lt;span class="nc">reveal&lt;/span> &lt;span class="nt">section&lt;/span> &lt;span class="nt">h1&lt;/span>&lt;span class="o">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">.&lt;/span>&lt;span class="nc">reveal&lt;/span> &lt;span class="nt">section&lt;/span> &lt;span class="nt">h2&lt;/span>&lt;span class="o">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">.&lt;/span>&lt;span class="nc">reveal&lt;/span> &lt;span class="nt">section&lt;/span> &lt;span class="nt">h3&lt;/span> &lt;span class="p">{&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="k">color&lt;/span>&lt;span class="p">:&lt;/span> &lt;span class="kc">navy&lt;/span>&lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;hr>
&lt;h1 id="questions">Questions?&lt;/h1>
&lt;p>&lt;a href="https://github.com/wowchemy/wowchemy-hugo-modules/discussions" target="_blank" rel="noopener">Ask&lt;/a>&lt;/p>
&lt;p>&lt;a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener">Documentation&lt;/a>&lt;/p></description></item><item><title>A Representation for Exploration that is Robust to State Estimate Drift</title><link>https://andreasaziegler.github.io/project/exploration/</link><pubDate>Mon, 30 Apr 2018 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/project/exploration/</guid><description>&lt;h2 id="motivation">Motivation&lt;/h2>
&lt;p>Exploration is a fundamental task in the field of robotics. The goal is to build a map of a previously unknown environment. Two tasks have to be performed repeatedly during exploration: mapping the space the robot has perceived so far, and planning where to go next. In this thesis we focus on the first task; the goal is to find a map representation that can deal with noisy state estimates. All map representations presented so far either assume perfect state estimates or need to rebuild the map after optimization.&lt;/p>
&lt;h2 id="approach">Approach&lt;/h2>
&lt;p>To achieve this goal, we build our representation with polygons. Each polygon represents the boundary between known free space and occupied or unknown space; the interior of a polygon is implicitly free space. We develop two approaches. The first builds a global map by continuously taking the union of the polygon from the current field of view and the polygons of the space explored so far. The second works with polygons of the local field of view only: for every local polygon, the frontiers, obstacles and free space are determined, and the robot explores as long as frontiers are present in the map.&lt;/p>
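&lt;p>The union step of the first approach can be prototyped with a standard polygon library; below is a minimal sketch using shapely, where axis-aligned rectangles stand in for actual sensed fields of view.&lt;/p>
&lt;pre tabindex="0">&lt;code class="language-python">from shapely.geometry import Polygon
from shapely.ops import unary_union

# Sketch of the global-map variant: the explored map is the running union
# of per-scan field-of-view polygons. The rectangles below are stand-ins
# for real sensed footprints.
fov_polygons = [
    Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),  # first field of view
    Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),  # overlapping second view
]

explored = unary_union(fov_polygons)  # interior = known free space
print(explored.area)                  # 7.0: the overlap is counted once
&lt;/code>&lt;/pre>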
&lt;h2 id="result">Result&lt;/h2>
&lt;p>In this thesis we propose a novel representation that can deal with noisy state estimates and does not need to rebuild the map, or parts of it, e.g. after a loop closure. In experiments we show that frontier-based exploration using our proposed representation achieves full coverage of the area to be explored.&lt;/p>
&lt;p>A correct and accurate common map is crucial for multiple robots to collaboratively perform tasks. In this semester project, an existing multi-agent Simultaneous Localisation and Mapping (SLAM) system should be extended to fuse the maps of individual robots in a way that rules out false map alignments and achieves an optimal alignment by using multiple place matches. In addition, redundant information, a consequence of fusing two maps, should be removed to keep optimization routines such as Bundle Adjustment (BA) performant.&lt;/p>
&lt;h2 id="approach">Approach&lt;/h2>
&lt;p>To achieve the first goal, a new approach is proposed which uses multiple KeyFrame Matches (KFMs) to fuse two maps. The approach also uses a novel idea to spread the KFMs over a larger area by skipping KeyFrames (KFs) between the detection of KFMs. To remove redundant information, KF culling is performed after all KFMs have been detected and before the main map fusion. This way, information present in both maps appears only once in the fused map, and the Pose Graph Optimization (PGO) and the BA do not have unnecessary data to process. To reduce the runtime of the optimization part (PGO and BA), Powell’s dog leg (DL) non-linear least squares technique was evaluated as a replacement for Levenberg-Marquardt (LM) optimization, and the system was adapted to increase its performance.&lt;/p>
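&lt;p>As a sketch of the redundancy idea behind KF culling, the snippet below drops keyframes whose map points are almost all observed by enough other keyframes. It is modeled on the common ORB-SLAM-style rule; the 90% threshold, the minimum observer count and the data layout are illustrative, not the exact values used in this project.&lt;/p>
&lt;pre tabindex="0">&lt;code class="language-python">def cull_redundant_keyframes(keyframes, redundancy=0.9, min_observers=3):
    """Drop KFs whose map points are almost all seen by enough other KFs.
    `keyframes` maps a KF id to the set of map-point ids it observes."""
    observers = {}  # map point id -> number of observing KFs
    for points in keyframes.values():
        for p in points:
            observers[p] = observers.get(p, 0) + 1
    culled = set()
    for kf, points in keyframes.items():
        redundant = sum(1 for p in points if observers[p] - 1 >= min_observers)
        if points and redundant / len(points) >= redundancy:
            culled.add(kf)
            for p in points:  # keep the counts consistent after culling
                observers[p] -= 1
    return culled

# Four KFs observing the same two points: the first one is redundant.
print(cull_redundant_keyframes({k: {10, 11} for k in range(4)}))  # {0}
&lt;/code>&lt;/pre>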
&lt;h2 id="result">Result&lt;/h2>
&lt;p>The proposed map fusion approach reduces drift and achieves better accuracy compared to the previous approach. With the implemented KF culling, the number of KFs which the PGO and the BA have to process is decreased significantly and therefore better timing is achieved. The usage of DL non-linear least squares technique reduced the runtime of the optimization furthermore.&lt;/p></description></item><item><title>Robust object tracking in 3D by fusing ultra-wideband and vision</title><link>https://andreasaziegler.github.io/project/object-tracking/</link><pubDate>Wed, 02 Nov 2016 00:00:00 +0000</pubDate><guid>https://andreasaziegler.github.io/project/object-tracking/</guid><description>&lt;h2 id="motivation">Motivation&lt;/h2>
&lt;p>Object tracking is an important part of many applications, especially for robotic systems interacting with humans.&lt;/p>
&lt;h2 id="problem-statement">Problem statement&lt;/h2>
&lt;p>Ultra-wideband (UWB) systems as well as vision-based object trackers are widely known and used. Both systems have their advantages and disadvantages. UWB systems can provide the 3D location of an object with an accuracy of approximately 10 cm, whereas vision-based object trackers can only provide the location of an object in 2D pixel coordinates, but with higher accuracy than UWB systems.&lt;/p>
&lt;h2 id="approach">Approach&lt;/h2>
&lt;p>So why not combine these two sources of information? Exactly this concept should be developed and evaluated in this semester project: the 3D position measured by the UWB system should be fused with the 2D pixel coordinates of a visual object tracker using an Extended Kalman Filter (EKF). In addition, a re-detection mechanism for the visual tracker should be implemented to increase the usability and stability of the system.&lt;/p>
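&lt;p>A minimal sketch of the fusion step is shown below: a constant-velocity EKF whose state is 3D position and velocity, with one linear update for the UWB position and one linearized pinhole-projection update for the pixel measurement. All intrinsics, noise levels and time steps are illustrative assumptions.&lt;/p>
&lt;pre tabindex="0">&lt;code class="language-python">import numpy as np

# State: [x, y, z, vx, vy, vz]. All constants below are made-up examples.
dt = 0.02
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)   # constant-velocity motion model
Q = 1e-3 * np.eye(6)                        # process noise
fx = fy = 600.0; cx, cy = 320.0, 240.0      # assumed pinhole intrinsics

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, h, H, R):
    """Generic EKF update with measurement z, model h and Jacobian H."""
    y = z - h(x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(6) - K @ H) @ P

H_uwb = np.hstack([np.eye(3), np.zeros((3, 3))])
def update_uwb(x, P, z):
    """UWB directly observes the 3D position (roughly 10 cm noise)."""
    return update(x, P, z, lambda s: H_uwb @ s, H_uwb, 0.01 * np.eye(3))

def update_cam(x, P, z):
    """Camera observes the pinhole projection, linearized at the estimate."""
    px, py, pz = x[:3]
    h = lambda s: np.array([fx * s[0] / s[2] + cx, fy * s[1] / s[2] + cy])
    H = np.array([[fx / pz, 0.0, -fx * px / pz**2, 0.0, 0.0, 0.0],
                  [0.0, fy / pz, -fy * py / pz**2, 0.0, 0.0, 0.0]])
    return update(x, P, z, h, H, 1.0 * np.eye(2))

x, P = np.array([0.0, 0.0, 3.0, 0.0, 0.0, 0.0]), np.eye(6)
x, P = predict(x, P)
x, P = update_uwb(x, P, np.array([0.1, 0.0, 3.0]))
x, P = update_cam(x, P, np.array([330.0, 240.0]))
&lt;/code>&lt;/pre>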
&lt;h2 id="result">Result&lt;/h2>
&lt;p>The proposed system achieves significantly better accuracy than the 3D positions measured by the UWB system alone. This proof of concept allows the system to be applied to a wide range of applications and also permits further extensions.&lt;/p></description></item></channel></rss>