Introduction of a Web API Execution Environment Based on a Server-less Architecture Using Apache Camel – Tsunayoshi Egawa
The presentation will be an introduction of a Web API execution environment based on a server-less architecture using Apache Camel.
Although Yahoo! JAPAN has a lot of Web APIs, instead of focusing on service development, its engineers had to spend considerable man-hours publishing and maintaining them (setting up servers, responding to vulnerabilities, etc.).
To solve this issue, a system was created that specializes in the Web API execution environment (UTOPIA).
Web API development in UTOPIA is done with an XML DSL and a few simple configuration settings.
UTOPIA uses Apache Camel as its routing engine to implement the business logic.
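As a rough illustration of what Camel's XML DSL looks like, the sketch below exposes a trivial HTTP endpoint and sets a JSON response; the endpoint URI and route id are illustrative, not UTOPIA's actual configuration:

```xml
<!-- Minimal Camel XML DSL sketch: expose an HTTP endpoint and set a
     static JSON body. URIs and ids are illustrative only. -->
<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="hello-api">
    <from uri="jetty:http://0.0.0.0:8080/hello"/>
    <setHeader headerName="Content-Type">
      <constant>application/json</constant>
    </setHeader>
    <setBody>
      <constant>{"message": "hello"}</constant>
    </setBody>
  </route>
</routes>
```

Because routes like this are plain data rather than compiled code, a platform such as UTOPIA can accept them as simple configuration from service teams.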
Although UTOPIA is a closed internal system, this presentation will show what Yahoo! JAPAN has learnt from running it, issues found in production use, and a comparison with similar existing systems and mechanisms such as OSGi and PaaS.
Apache Tika detects and extracts metadata and text from a huge range of file formats and types. From Search to Big Data, single file to internet scale, if you’ve got files, Tika can help you get out useful information!
Apache Tika has been around for nearly 10 years now, and with the passage of all that time, plus the new 2.0 release, a lot has changed. Not only has there been a huge increase in the number of supported formats, but the ways of using Tika have expanded, and some of the philosophies on the best way to handle things have altered with experience. Tika has gained support for a wide range of programming languages too, and more recently, Big Data-scale support.
Whether you’re an old-hand with Tika looking to know what’s hot or different with 2.0, or someone new looking to learn more about the power of Tika, this talk will have something in it for you!
Distributing Configuration with Apache Tamaya – Anatole Tresch
In this talk we will show how Tamaya can be used to configure a distributed system based on Docker containers. We will use Consul or etcd as a backend for reading configuration common to all components, and combine it with environment-specific entries for each instance. We will also update the configuration at runtime and trigger the corresponding configuration change events, so our code can adapt to the changes. As a result, we will get a good overview of the API and SPI of Tamaya, and of why it should be an important component in every project nowadays.
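The layering idea described above (values common to all components, overridden by instance-specific entries) can be sketched in plain Java, independently of Tamaya's actual API; the class and method names below are illustrative only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch of layered configuration: layers are added
// lowest-priority first and later layers override earlier ones, mirroring
// how a Consul/etcd-backed common layer can be combined with per-container
// overrides. This is NOT Tamaya's real API, just the underlying idea.
class LayeredConfig {
    private final Map<String, String> merged = new LinkedHashMap<>();

    LayeredConfig addLayer(Map<String, String> layer) {
        merged.putAll(layer); // later puts overwrite earlier keys
        return this;
    }

    String get(String key) {
        return merged.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> common = Map.of("db.host", "db.internal", "db.port", "5432");
        Map<String, String> instance = Map.of("db.host", "10.0.0.5"); // per-container override

        LayeredConfig config = new LayeredConfig()
                .addLayer(common)     // shared backend (e.g. Consul or etcd)
                .addLayer(instance);  // environment-specific entries win

        System.out.println(config.get("db.host")); // 10.0.0.5
        System.out.println(config.get("db.port")); // 5432
    }
}
```

Tamaya generalizes this with prioritized `PropertySource`s behind a uniform lookup API, so application code never needs to know which backend a value came from.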
Is it a panel? Is it a talk? It is a Podling Shark Tank! Back by popular demand with even sharkier judges! What is it, you ask? Well, this is just like the Shark Tank TV show (think speed dating between entrepreneurs and investors), but instead of "Squirrel Boss" and "Man Candle" you'll be hearing pitches for Apache Incubator projects. Also, instead of Mark Cuban and Kevin O'Leary you'll be pitching to a panel of ASF grey beards (trying to convince them that your project is worthy of their esteemed attention and endorsement). There will be snark, there will be prizes, there will be reciting of the Apache Way creed. But most of all there will be fun. We guarantee that!
Commercial Reasons Your Colleagues Should be Community First – Gregory Chase
The Apache Way prescribes "Community over Code", asking participants to think community-first. This may seem to run counter to the commercial interests of for-profit companies. Yet, many contributors in the ASF participate because it's part of their job.
Wouldn't it be nice if the rest of your company also saw the commercial benefit of thinking community-first in their work?
This session dispels the myth that open source community action needs to be separate from commercial development and sales. We'll discuss how to help your coworkers increase the impact of their daily work, meeting the needs of growing both the business and the community. They can do this with very little overhead, and enhance the impact of collective work for the benefit of users and customers. We'll discuss some theory, explore what's worked at my previous companies, and look at future trends.
But We’re Already Open Source! Why Would I Want to Bring My Code to Apache? – Nick Burch
Open Source – that’s just a tick box, right? No? Anyway, we’re open source, so that’s all we need, right? No? OK, so what’s this Apache thing, and why might we want to take our existing open source project to the Apache Software Foundation? And why might we not!
Join us as we look at several real world examples of where companies have chosen to contribute their existing open source code to the Apache Software Foundation. We’ll see the advantages they got from it, the problems they faced along the way, why they did it, and how it helped their business. We’ll also look briefly at where it may not be the right fit.
How to Get Your Release Through the Incubator – Justin Mclean
All podling releases need to be voted on by the Incubator PMC before being released to the world. I'll go through what the Incubator PMC looks for in every release and what you can do to make it pass that vote and get your project one step closer to graduation.
In this talk I'll describe current Incubator and ASF policy, recent changes that you may not be aware of, and go into detail on the legal requirements of common licenses and the best way to assemble your NOTICE and LICENSE files. Where possible I'll describe the reasons why things are done a certain way, which may not always be obvious from our documentation.
I’ll show how I review a release and the simple tools I use. I’ll go through a worked example or two, including a fictional project called Apache Wombat, and cover common mistakes I’ve seen in releases and finally where you can get help if you need it.
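For reference, the minimal NOTICE file for a project like the fictional Apache Wombat follows the standard ASF template (the copyright year here is illustrative):

```
Apache Wombat
Copyright 2016 The Apache Software Foundation

This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
```

Attribution notices required by bundled third-party dependencies are appended below this block, while their license texts belong in LICENSE, not NOTICE.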
How to Be a Bad Mentor for a Struggling Podling Subject to Terrible Policies – Roman Shaposhnik
With major apologies to David Patterson for stealing the title of his last lecture https://s.apache.org/how_to_be_a_bad_professor (I guess he taught us well after all!), I would like to present this talk focusing on the Apache Incubator, its podlings and mentors. This presentation will start with an overview of common mistakes that I have observed while mentoring on my own and watching others do the same. We will then discuss misconceptions on both sides that make Apache Incubator policies appear daunting and bureaucratic. We will conclude with a 10-step program aimed at helping podlings master the Apache Way and graduate as quickly as possible. Finally, a few battle stories will be shared and wounds put on display. This former VP of the Incubator has a few to show.
While in the past it was only possible to build Flex applications with Maven using Flexmojos, we have now started creating a brand-new plugin as part of the Apache Flex project. While we are still missing a handful of features, the path has been set and the new plugin is a much more lightweight implementation.
Parallel to this we also completely refactored the entire FlexJS project to be buildable with Maven. Both these efforts now allow us to officially publish Maven artifacts of Apache FlexJS and hereby speed up the time for people to get started with FlexJS.
In this talk I would not only like to introduce the basic functionality of our new flexjs-maven-plugin, but also the reasons for, and the strategies used in, migrating the project from Ant to Maven.
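As a rough sketch of how such a plugin would be wired into a build, a POM fragment might look like the following; the groupId and version shown are placeholders rather than confirmed coordinates, so consult the Apache Flex documentation for the published artifacts:

```xml
<build>
  <plugins>
    <!-- Placeholder coordinates: check the Apache FlexJS documentation
         for the released groupId and version. -->
    <plugin>
      <groupId>org.apache.flex</groupId>
      <artifactId>flexjs-maven-plugin</artifactId>
      <version><!-- released version --></version>
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>
```

Registering the plugin with `<extensions>true</extensions>` is the usual Maven mechanism for letting a plugin contribute custom packaging types and lifecycle bindings.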
Structuring Medical Records with Apache Stanbol – Rafa Haro, Athento & Antonio David Perez Morales
Apache Stanbol (https://stanbol.apache.org) is a top-level Apache project whose main objective is to provide a set of reusable components for semantic content management. Built with an extremely modular design, Stanbol is an OSGi-based framework that uses a number of Apache tools such as Solr, Felix, OpenNLP, Clerezza, Tika, Sling and Jena.
The intention of this talk is twofold. On one hand, we will offer a detailed overview of the current state of the project, from both a technical and a community point of view. On the other hand, we will showcase a real use case where Stanbol is being used to structure the text of medical records in several languages. Through this real use case, we will show how Stanbol can be used for tasks such as processing multi-language text, using semantic datasets for content enhancement, and NLP tasks like fact extraction and negation detection.
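To give a feel for how content reaches Stanbol's enhancement engines, a plain-text document can be POSTed over HTTP to the enhancer endpoint of a running instance; the host, port, and sample sentence below are assumptions for illustration:

```
curl -X POST \
     -H "Content-Type: text/plain" \
     --data "Patient shows no sign of hypertension." \
     http://localhost:8080/enhancer
```

The response contains the extracted enhancements (entities, facts, and so on) as RDF, which the calling content management system can then index or reason over.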