Large-scale Entity Extraction and Probabilistic Record Linkage

Tuesday, August 19, 2014

Large-scale entity extraction, disambiguation and linkage in Big Data can challenge the traditional methodologies developed over the last three decades. Entity linkage, in particular, is a cornerstone of a wide spectrum of applications, such as Master Data Management, Data Warehousing, Social Graph Analytics, Fraud Detection and Identity Management. Traditional rules-based heuristic methods usually don't scale well, are language-specific, and require significant maintenance over time.

We will introduce the audience to the use of probabilistic record linkage, also known as specificity-based linkage, on Big Data to perform language-independent, large-scale entity extraction, resolution and linkage across diverse sources. We will also present a live demonstration reviewing the different steps required during the data integration process (ingestion, profiling, parsing, cleansing, standardization and normalization), and show the basic concepts behind probabilistic record linkage on a real-world application.
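As a minimal sketch of the idea behind probabilistic (specificity-based) record linkage, the Fellegi-Sunter-style weighting below scores a candidate record pair field by field. The field names, m/u probability values, and decision threshold are illustrative assumptions, not taken from the talk:

```java
// Minimal Fellegi-Sunter-style match weighting (illustrative values only).
public class LinkageScorer {

    // m = P(field agrees | records are a true match),
    // u = P(field agrees | records are NOT a match).
    // u captures "specificity": a rare value (low u) yields a large
    // agreement weight, while a common value (high u) yields a small one.
    static double fieldWeight(boolean agrees, double m, double u) {
        return agrees ? log2(m / u) : log2((1 - m) / (1 - u));
    }

    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        // Compare two records on two fields (hypothetical m/u estimates).
        double score = 0;
        score += fieldWeight(true, 0.95, 0.01); // surnames agree; surnames are specific
        score += fieldWeight(true, 0.90, 0.20); // cities agree; cities are common
        System.out.printf("total match weight = %.2f%n", score); // prints 8.74
        // Classify against a hypothetical decision threshold.
        System.out.println(score > 5.0 ? "link" : "clerical review");
    }
}
```

Summing per-field log-likelihood ratios like this is what lets the approach stay language-independent: the weights come from observed value frequencies in the data, not from hand-written matching rules.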

About the Author

Dr. Flavio Villanustre is the Vice President of Infrastructure and Products for HPCC Systems, the open source Big Data processing platform from LexisNexis. In this position, Flavio is responsible for information and physical security, overall infrastructure strategy and new product development. Prior to 2001, Dr. Villanustre served in different companies in a variety of roles in infrastructure, information security and information technology. In addition, Dr. Villanustre has been involved with the open source community for over 15 years through multiple initiatives. Some of these include founding the first Linux User Group in Buenos Aires (BALUG) in 1994, releasing several pieces of software under different open source licenses, and evangelizing open source to different audiences through conferences, training and education. Prior to his technology career, Dr. Villanustre was a neurosurgeon.
AJUG Meetup

Data Microservices with Spring Cloud Stream, Task, and Data Flow

Tuesday, July 19, 2016

Microservice-based architectures are not just for distributed web applications! They are also a powerful approach for building distributed stream and batch processing applications.

Spring Cloud Data Flow enables you to create and orchestrate standalone executable applications that communicate over messaging middleware such as Kafka and RabbitMQ and that, when run together, form a distributed stream processing application. It also lets users create and orchestrate short-lived microservices, such as batch jobs or Boot applications, that perform a task and then terminate when complete.
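As a sketch of what this looks like in practice, the Data Flow shell composes stream applications with a pipe-style DSL and registers short-lived tasks; the stream and task names below are hypothetical:

```shell
# Compose three Spring Cloud Stream apps (source | processor | sink) into a
# stream; the connecting "pipes" become topics/queues on Kafka or RabbitMQ.
dataflow:> stream create --name http-ingest --definition "http | transform --expression=payload.toUpperCase() | log" --deploy

# A short-lived microservice (a Spring Cloud Task) that runs and then terminates.
dataflow:> task create print-timestamp --definition "timestamp"
dataflow:> task launch print-timestamp
```

Each name in the definition refers to an independently deployable Boot application, which is what allows the pieces to be scaled and versioned separately.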

This allows you to scale, version and operationalize stream processing and task applications following microservice-based patterns and practices on a variety of runtime platforms, such as Cloud Foundry and Apache YARN.

Location:

Holiday Inn Atlanta-Perimeter/Dunwoody
4386 Chamblee Dunwoody Road, Atlanta, GA