Ticket #56: schedule.xml

File schedule.xml, 34.3 KB (added by philipp, 5 years ago)

schedule.xml from https://promcon.io/2017-munich/schedule.xml
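
As a quick illustration (not part of the attachment itself), a pentabarf-style schedule file like this can be walked with Python's standard library. The trimmed sample string below mirrors the element names used in the attached file; it is a sketch, not the ticket's code:

```python
# Minimal sketch: extracting events from a pentabarf-style schedule.xml.
# The sample is a trimmed excerpt mirroring the attached file's structure.
import xml.etree.ElementTree as ET

SAMPLE = """<schedule>
  <conference><title>PromCon 2017</title></conference>
  <day date="2017-08-17" index="1">
    <room name="Main room">
      <event id="2">
        <start>9:15</start>
        <duration>00:30</duration>
        <title>Welcome and Introduction</title>
        <type>talk</type>
        <persons><person id="1001">Julius Volz</person></persons>
      </event>
    </room>
  </day>
</schedule>"""

def events(xml_text):
    """Yield one dict per <event>, annotated with its day and room."""
    root = ET.fromstring(xml_text)
    for day in root.iter("day"):
        for room in day.iter("room"):
            for ev in room.iter("event"):
                yield {
                    "date": day.get("date"),
                    "room": room.get("name"),
                    "start": ev.findtext("start"),
                    "title": ev.findtext("title"),
                    "speakers": [p.text for p in ev.iter("person")],
                }

for ev in events(SAMPLE):
    print(ev["date"], ev["start"], ev["title"], ev["speakers"])
    # -> 2017-08-17 9:15 Welcome and Introduction ['Julius Volz']
```

The same `events()` helper works unchanged on the full file (e.g. `events(open("schedule.xml").read())`), since every event sits under a `<day>`/`<room>` pair.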

<?xml version="1.0" encoding="UTF-8"?>
<schedule>
  <conference>
    <title>PromCon 2017</title>
    <start>2017-08-17</start>
    <end>2017-08-18</end>
    <days>2</days>
    <day_change>00:00</day_change>
    <timeslot_duration>00:15</timeslot_duration>
  </conference>
  <day date="2017-08-17" index="1">
    <room name="Main room">
      <event id="2">
        <date>2017-08-17:00:00+00:00</date>
        <start>9:15</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Welcome and Introduction</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1001">Julius Volz</person>
        </persons>
        <description>
          Julius opens the conference and gives an overview of the state of Prometheus.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/welcome-and-introduction</full_conf_url>
        <conf_url>/2017-munich/talks/welcome-and-introduction</conf_url>
      </event>
      <event id="3">
        <date>2017-08-17:00:00+00:00</date>
        <start>9:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Monitoring Cloudflare's Planet-Scale Edge Network with Prometheus</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1002">Matt Bostock</person>
        </persons>
        <description>
          Cloudflare operates a global anycast edge network serving content for 6 million web sites. This talk explains how we monitor our network, how we migrated from Nagios to Prometheus and the architecture we chose to provide maximum reliability for monitoring. We'll also discuss the impact of alert fatigue and how we reduced alert noise by analysing data, making alerts more actionable and alerting on symptoms rather than causes.

This talk will cover:

- The challenges of monitoring a high volume, anycast, edge network across 100+ locations
- The architecture we chose to maximise the reliability of our monitoring
- Why Prometheus excels as the new industry standard for modern monitoring
- Approaches for reducing alert noise and alert fatigue
- Triaging alerts into a ticket system
- Analysing past alert data for continuous improvement
- The pain points we endured
- Effecting change across engineering teams
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/monitoring-cloudflares-planet-scale-edge-network-with-prometheus</full_conf_url>
        <conf_url>/2017-munich/talks/monitoring-cloudflares-planet-scale-edge-network-with-prometheus</conf_url>
      </event>
      <event id="4">
        <date>2017-08-17:00:00+00:00</date>
        <start>10:00</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Start Your Engines: White Box Monitoring for Your Load Tests</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1003">Alexander Schwartz</person>
        </persons>
        <description>
          You think monitoring is only for production? Wrong: add a metrics endpoint to your application to get insights during your load tests - and use them for free to monitor production!

This talk shows how to set up the load testing tools JMeter and Gatling to push their metrics to Prometheus. It also makes the case for exposing metrics as part of core application development instead of treating them as a small add-on before go-live.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/start-your-engines-white-box-monitoring-for-load-tests</full_conf_url>
        <conf_url>/2017-munich/talks/start-your-engines-white-box-monitoring-for-load-tests</conf_url>
      </event>
      <event id="5">
        <date>2017-08-17:00:00+00:00</date>
        <start>10:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Best Practices and Beastly Pitfalls</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1001">Julius Volz</person>
        </persons>
        <description>
          Julius gives an overview of the most important best practices and most treacherous pitfalls when starting to use Prometheus.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/best-practices-and-beastly-pitfalls</full_conf_url>
        <conf_url>/2017-munich/talks/best-practices-and-beastly-pitfalls</conf_url>
      </event>
      <event id="7">
        <date>2017-08-17:00:00+00:00</date>
        <start>11:15</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Prometheus as a (Internal) Service</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1004">Paul Traylor</person>
        </persons>
        <description>
          LINE is a large company with many different development teams. Momentum can be a powerful force within a company, so it can take some time, training (including unlearning the old) and evangelizing to get a new system adopted. This talk will give a brief summary of our development team’s monitoring environment (Prometheus, Grafana, Promgen) before going into educating developers on Prometheus adoption and some of the struggles and lessons learned from providing Prometheus as an internal service.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/prometheus-as-a-internal-service</full_conf_url>
        <conf_url>/2017-munich/talks/prometheus-as-a-internal-service</conf_url>
      </event>
      <event id="8">
        <date>2017-08-17:00:00+00:00</date>
        <start>11:45</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Grafana and Prometheus</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1005">Carl Bergquist</person>
        </persons>
        <description>
          Being the default dashboard for Prometheus, I guess most of you have already tried Grafana and created some graphs. But have you tried the table and heat map panels? Did you know that Grafana also offers multiple plugins to visualize and show your Prometheus data? What other tricks and optimizations are there?

We will also look at the major changes in Grafana since the last PromCon and what we are currently working on.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/grafana-and-prometheus</full_conf_url>
        <conf_url>/2017-munich/talks/grafana-and-prometheus</conf_url>
      </event>
      <event id="10">
        <date>2017-08-17:00:00+00:00</date>
        <start>12:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Why We Love Prometheus Even Though I Hate It</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1006">Yaroslav Molochko</person>
        </persons>
        <description>
          At Anchorfree we had about 10 monitoring systems of different types. They were difficult to manage, and we had zero observability, as it was impossible to see the whole picture. We decided to go with Prometheus at full speed and migrated all the major and minor systems to it. This was not an easy move, as we faced a bunch of problems, fundamental misunderstandings, and resistance.

In this talk I would like to highlight the problems we faced and how we solved them, and answer why we still hate Prometheus and why we love it with all our hearts at the same time.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/why-we-love-prometheus-even-though-i-hate-it</full_conf_url>
        <conf_url>/2017-munich/talks/why-we-love-prometheus-even-though-i-hate-it</conf_url>
      </event>
      <event id="11">
        <date>2017-08-17:00:00+00:00</date>
        <start>13:00</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Analyze Prometheus Metrics like a Data Scientist</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1007">Georg Öttl</person>
        </persons>
        <description>
          Gathering software metrics with Prometheus is great and easy. However, at some point there are too many timeseries to craft handwritten rule-based alert systems. In this talk I will show how to export data from the Prometheus HTTP API, show how and what to analyze with open-source tools like R and Python's SciPy, and describe why DevOps and whitebox monitoring fit so well here. As an outlook I will show how to integrate/export timeseries to machine learning services.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/analyze-prometheus-metrics-like-a-data-scientist</full_conf_url>
        <conf_url>/2017-munich/talks/analyze-prometheus-metrics-like-a-data-scientist</conf_url>
      </event>
      <event id="13">
        <date>2017-08-17:00:00+00:00</date>
        <start>14:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Alertmanager and High Availability</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1008">Frederic Branczyk</person>
        </persons>
        <description>
          The Alertmanager is a highly critical component in the monitoring pipeline. Operators must trust it to be a reliable component. Thus in the 0.5 release of the Alertmanager a high availability mode has been implemented. This talk will go over some implementation details of the high availability mode as well as highlight what this means for operators using and running the Alertmanager.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/alertmanager-and-high-availability</full_conf_url>
        <conf_url>/2017-munich/talks/alertmanager-and-high-availability</conf_url>
      </event>
      <event id="14">
        <date>2017-08-17:00:00+00:00</date>
        <start>15:00</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Play with Prometheus - Journey to Make “testing in Production” More Reliable</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1009">Giovanni Gargiulo</person>
        </persons>
        <description>
          Gilt is a high-end fashion e-commerce company that in recent years has moved from a monolithic architecture to 150+ distributed microservices running on AWS.

In Gilt we have to make sure our website runs smoothly and our customers are getting the best experience we can deliver. For this reason we have to keep an eye on our microservices and make sure they behave as expected.

We’ve been looking for a long time for a way to keep long-lasting time series that we could aggregate, query, visualize and use for alerting. After a lot of trial and error, we ended up finding the right pieces of the puzzle: Prometheus, Alertmanager, Push Gateway, and Grafana.

Our success story went viral in Gilt, and very recently Prometheus and Grafana have been added to the Gilt (and HBC!) tech radar. A few teams have already lined up to adopt our Prometheus stack in production, and eventually we will implement a hierarchical Prometheus federation alongside meta-monitoring.

I would like to share with you our journey, what worked well, the problems we faced, and how we fixed them.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/play-with-prometheus</full_conf_url>
        <conf_url>/2017-munich/talks/play-with-prometheus</conf_url>
      </event>
      <event id="16">
        <date>2017-08-17:00:00+00:00</date>
        <start>15:45</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>The Uninstrumentable; Getting Apache Spark and Prometheus to Play Nicely</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1010">Dan Rathbone</person>
          <person id="1011">Joe Stringer</person>
        </persons>
        <description>
          Instrumenting your code with Prometheus is simple and easy. Or so we thought until we tried to instrument our Python application running under Apache Spark… The distributed nature of Spark presents some interesting challenges when it comes to instrumenting your code effectively, for example a lack of global state, transient processes and no control over the execution profile.

We’ll talk about our myriad failed attempts at instrumenting under Spark and our journey to finally getting something working effectively, without DOSing Prometheus with millions of time series! :)
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/the-uninstrumentable-getting-apache-spark-and-prometheus-to-play-nicely</full_conf_url>
        <conf_url>/2017-munich/talks/the-uninstrumentable-getting-apache-spark-and-prometheus-to-play-nicely</conf_url>
      </event>
      <event id="17">
        <date>2017-08-17:00:00+00:00</date>
        <start>16:15</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Social Aspects of Change (or How We Stopped Worrying and Learned to Love Prometheus)</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1012">Richard Hartmann</person>
        </persons>
        <description>
          Companies are (still) run by people, and those people need to accept the technology driving the company's business. Introducing the best of technologies will lead nowhere unless this change is accepted at all levels of the organization. This talk will cover lessons learned when introducing Prometheus into a long-established company with many organically-grown and thus highly-fragmented systems.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/social-aspects-of-change</full_conf_url>
        <conf_url>/2017-munich/talks/social-aspects-of-change</conf_url>
      </event>
      <event id="19">
        <date>2017-08-17:00:00+00:00</date>
        <start>17:00</start>
        <duration>01:00</duration>
        <room>Main room</room>
        <abstract/>
        <title>Lightning Talks</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description>
          Lightning talks are 5 minutes each.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/lightning-talks-day1</full_conf_url>
        <conf_url>/2017-munich/talks/lightning-talks-day1</conf_url>
      </event>
    </room>
    <room name="Elsewhere">
      <event id="1">
        <date>2017-08-17:00:00+00:00</date>
        <start>8:15</start>
        <duration>01:00</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Breakfast and Registration</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="6">
        <date>2017-08-17:00:00+00:00</date>
        <start>11:00</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="9">
        <date>2017-08-17:00:00+00:00</date>
        <start>12:15</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="12">
        <date>2017-08-17:00:00+00:00</date>
        <start>13:30</start>
        <duration>01:00</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Lunch</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="15">
        <date>2017-08-17:00:00+00:00</date>
        <start>15:30</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="18">
        <date>2017-08-17:00:00+00:00</date>
        <start>16:45</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="20">
        <date>2017-08-17:00:00+00:00</date>
        <start>18:00</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <abstract/>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
    </room>
    <room name="Augustiner Bräustuben">
      <event id="21">
        <date>2017-08-17:00:00+00:00</date>
        <start>18:30</start>
        <duration>04:00</duration>
        <room>Augustiner Bräustuben</room>
        <abstract/>
        <title>Happy Hour at Augustiner Bräustuben</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description>
            Happy Hour at &lt;a
            href="http://www.braeustuben.de/"&gt;Augustiner
            Bräustuben&lt;/a&gt; (&lt;a
            href="https://www.google.de/maps/place/Augustiner+Br%C3%A4ustuben/@48.1391151,11.5456626,15z/data=!4m5!3m4!1s0x0:0x1a2efa2cb8130a2a!8m2!3d48.1391151!4d11.5456626?sa=X&amp;ved=0ahUKEwjkiKqalM3VAhWKmbQKHTYXBiAQ_BIIlgEwDg"
            &gt;map link&lt;/a&gt;)
        </description>
      </event>
    </room>
  </day>
  <day date="2017-08-18" index="2">
    <room name="Main room">
      <event id="23">
        <date>2017-08-18:00:00+00:00</date>
        <start>10:00</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Prometheus Everything, Observing Kubernetes in the Cloud</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1013">Sneha Inguva</person>
        </persons>
        <description>
          As the industry moves towards a microservices architecture, many companies are embracing container orchestration solutions. DigitalOcean is no different. Over the course of the last year, DigitalOcean’s move towards a container orchestration solution based on Kubernetes has empowered service owners to quickly and efficiently deploy and update their applications. A vital component of this, however, is a white box monitoring and alerting solution based upon Prometheus and Alertmanager.

In this talk, you will hear about DigitalOcean’s in-cluster setup of Prometheus and Alertmanager that allows service owners to instrument their own metrics and alerts. Listeners will hear about the architecture from both the service owner’s point of view as well as the internals that allow for the dynamic addition of alerts. I will highlight the successes of this approach - ease of use and avoiding alerting fatigue - as well as potential pitfalls and gotchas. And lastly, I will end with a discussion of future modifications.

Individuals looking to similarly leverage Prometheus and Alertmanager to monitor their own container orchestration platforms will be able to learn from our experiences at DigitalOcean and subsequently apply key lessons learned to their own alerting use cases.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/prometheus-everything-observing-kubernetes-in-the-cloud</full_conf_url>
        <conf_url>/2017-munich/talks/prometheus-everything-observing-kubernetes-in-the-cloud</conf_url>
      </event>
      <event id="24">
        <date>2017-08-18:00:00+00:00</date>
        <start>9:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Cortex: Prometheus as a Service, One Year On</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1014">Tom Wilkie</person>
        </persons>
        <description>
          At PromCon 2016, I presented "Project Frankenstein: A multitenant, horizontally scalable Prometheus as a service". It's now one year later, and lots has changed - not least the name! This talk will discuss what we've learnt running a Prometheus service for the past year, the architectural changes we made from the original design, and the improvements we've made to the Cortex user experience.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/cortex-prometheus-as-a-service-one-year-on</full_conf_url>
        <conf_url>/2017-munich/talks/cortex-prometheus-as-a-service-one-year-on</conf_url>
      </event>
      <event id="25">
        <date>2017-08-18:00:00+00:00</date>
        <start>10:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Alert on All the Things: Integrating Quicksilver with Prometheus</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1015">Lorenz Bauer</person>
        </persons>
        <description>
          Cloudflare provides its services from 115 data centres in 57 countries. One of the most critical systems is a key-value store that replicates configuration data to every single machine, which we recently rewrote from scratch. As developers we were early adopters of Prometheus at Cloudflare, and this talk will explain how we set up Grafana dashboards for monitoring and Alertmanager for alerting, giving us unprecedented insight. It’ll also cover the gotchas we encountered. Like that one time when we triggered 7000 alerts at once.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/alert-on-all-the-things-integrating-quicksilver-with-prometheus</full_conf_url>
        <conf_url>/2017-munich/talks/alert-on-all-the-things-integrating-quicksilver-with-prometheus</conf_url>
      </event>
      <event id="27">
        <date>2017-08-18:00:00+00:00</date>
        <start>11:15</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Integrating Prometheus and InfluxDB</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1016">Paul Dix</person>
        </persons>
        <description>
          This talk will look at the different integrations between InfluxDB and Prometheus. We'll dive into using InfluxDB for remote long term storage. Other examples will show how to use Kapacitor to scrape Prometheus metrics targets to pull data into InfluxDB and transform it on the fly to different schemas. Finally, we'll take a look at the upcoming enhancements to the Influx Query Language and possible implementation of PromQL within Influx itself for better long term integration of the two projects.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/integrating-prometheus-and-influxdb</full_conf_url>
        <conf_url>/2017-munich/talks/integrating-prometheus-and-influxdb</conf_url>
      </event>
      <event id="28">
        <date>2017-08-18:00:00+00:00</date>
        <start>11:45</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>A Worked Example of Monitoring a Queue Based Application</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1017">Laurie Clark-Michalek</person>
        </persons>
        <description>
          There has been a lot of work around educating people about how to instrument their applications, and how to set up your Prometheus installation to do tons of interesting things. This talk aims to address questions around which metrics provide the most value, and why. We will go through an example of instrumenting a service in production at Qubit, and explain the rationale behind the metrics we use for alerting and dashboarding. The aim is to give viewers a concrete example of how to monitor something, and highlight the logic behind the decisions made, be they specific to this service, or generalisable to almost anything.

Viewers should come away with the ability to implement meaningful instrumentation on their services, and a basic understanding of the answers to the questions ‘what makes a good metric’, ‘what makes a good dashboard’ and ‘what makes a good alert’. My aim is that the services that viewers write will needlessly wake people up less often, and when they do wake people up, the service’s dashboards will be a boon to the responder, and not a false friend.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/a-worked-example-of-monitoring-a-queue-based-application</full_conf_url>
        <conf_url>/2017-munich/talks/a-worked-example-of-monitoring-a-queue-based-application</conf_url>
      </event>
      <event id="30">
        <date>2017-08-18:00:00+00:00</date>
        <start>12:30</start>
        <duration>01:00</duration>
        <room>Main room</room>
        <abstract/>
        <title>Storing 16 Bytes at Scale</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1018">Fabian Reinartz</person>
        </persons>
        <description>
          From the beginning, Prometheus was built as a monitoring system with Cloud Native environments in mind. Orchestration systems such as Kubernetes are rapidly gaining traction and unlock features of highly dynamic environments, such as frequent rolling updates and auto-scaling, for everyone. This inevitably puts new strains on Prometheus as well.

In this talk we explore what these challenges are and how we are addressing them by building a new storage layer from the ground up. The new design provides efficient indexing techniques that gracefully handle high turnover rates of monitoring targets and provide consistent query performance. At the same time, it significantly reduces resource requirements and paves the way for advanced features like hot backups and dynamic retention policies.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/storing-16-bytes-at-scale</full_conf_url>
        <conf_url>/2017-munich/talks/storing-16-bytes-at-scale</conf_url>
      </event>
      <event id="32">
        <date>2017-08-18:00:00+00:00</date>
        <start>14:30</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <abstract/>
        <title>Improving User and Developer Experience of the Alertmanager UI</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1019">Max Inden</person>
        </persons>
        <description>
          Alertmanager deduplicates, groups, and routes alerts from Prometheus to all kinds of paging services. With it comes a dated UI which does not live up to the expectations of the users, nor does it attract new contributors.

From this talk, you will learn how we addressed these issues when building the new UI from scratch. We made it friendlier to users by removing unnecessary domain language noise. In addition we added new power features such as filtering and grouping. As a result, it is now much easier to navigate through thousands of alerts.

We chose to build the new UI with Elm, a functional programming language for web interfaces. Elm enabled us to develop fast and with confidence by keeping a promise of zero runtime errors. It lowered the entry barrier for non-frontend developers and made the project appealing to newcomers.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/improving-user-and-developer-experience-of-the-alertmanager-ui</full_conf_url>
        <conf_url>/2017-munich/talks/improving-user-and-developer-experience-of-the-alertmanager-ui</conf_url>
      </event>
558      <event id="33">
559        <date>2017-08-18:00:00+00:00</date>
560        <start>15:00</start>
561        <duration>00:30</duration>
562        <room>Main room</room>
563        <abstract/>
564        <title>Using TSDB as a library</title>
565        <type>talk</type>
566        <abstract/>
567        <released>True</released>
568        <persons>
569          <person id="1020">Goutham Veeramachaneni</person>
570        </persons>
571        <description>
572          This talk will be an introductory talk about using the new datastore as a library. It will be based on this blog: https://geekon.tech/content/post/tsdb-embeddable-timeseries-database/ with clearer and more verbose examples and a simple REST based timeseries demo at the end.
574After this there will be a short demo of how Prometheus uses tsdb internally. (The layer is minimal, but it won't hurt to explicitly mention it for new contributors)
576People will leave with:
578* A clear understanding of how to use tsdb.
579* The fundamentals that are needed to start contributing to the tsdb &lt;--&gt; Prometheus layer.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/using-tsdb-as-a-library</full_conf_url>
        <conf_url>/2017-munich/talks/using-tsdb-as-a-library</conf_url>
      </event>
      <event id="35">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>15:45</start>
        <duration>01:00</duration>
        <room>Main room</room>
        <title>Staleness in Prometheus 2.0</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1021">Brian Brazil</person>
        </persons>
        <description>
          The biggest semantic change in Prometheus 2.0 is the new staleness handling. This long-awaited feature means there is no longer a fixed five-minute staleness period: time series now go stale as soon as they are no longer exposed, and targets that no longer exist do not linger for a full five minutes. Learn how it works and how to take advantage of it.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/staleness-in-prometheus-2-0</full_conf_url>
        <conf_url>/2017-munich/talks/staleness-in-prometheus-2-0</conf_url>
      </event>
      <event id="37">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>17:00</start>
        <duration>01:00</duration>
        <room>Main room</room>
        <title>Lightning Talks</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description>
          Lightning talks are 5 minutes each.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/lightning-talks-day2</full_conf_url>
        <conf_url>/2017-munich/talks/lightning-talks-day2</conf_url>
      </event>
      <event id="38">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>18:00</start>
        <duration>00:30</duration>
        <room>Main room</room>
        <title>Closing</title>
        <type>talk</type>
        <abstract/>
        <released>True</released>
        <persons>
          <person id="1001">Julius Volz</person>
        </persons>
        <description>
          Julius will close the conference with a few parting words.
        </description>
        <full_conf_url>https://promcon.io/2017-munich/2017-munich/talks/closing</full_conf_url>
        <conf_url>/2017-munich/talks/closing</conf_url>
      </event>
    </room>
    <room name="Elsewhere">
      <event id="22">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>8:30</start>
        <duration>01:00</duration>
        <room>Elsewhere</room>
        <title>Breakfast</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="26">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>11:00</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="29">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>12:15</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="31">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>13:30</start>
        <duration>01:00</duration>
        <room>Elsewhere</room>
        <title>Lunch</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="34">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>15:30</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
      <event id="36">
        <date>2017-08-18T00:00:00+00:00</date>
        <start>16:45</start>
        <duration>00:15</duration>
        <room>Elsewhere</room>
        <title>Break</title>
        <type>break</type>
        <abstract/>
        <released>True</released>
        <persons/>
        <description/>
      </event>
    </room>
  </day>