By making full use of your own data, you create value: for people, for society, for customers, for businesses and governments. The first step is overcoming obstacles common to many organizations, so that you can easily unlock the value of your own and external data and develop new applications.
KAVE forms the heart of a new ‘Big Data ecosystem’, removing practical obstacles and allowing quick, easy and consistent Big Data analysis. Find out more about this open source platform below.
Spring from where you are, whatever you aim to reach, KAVE evolves with you.
Sander has over 15 years of experience in large-scale distributed computing, real-time systems and data processing technologies. He is responsible for new product development and the global roll-out of Big Data services at KPMG.
He holds a PhD in High Energy Physics (HEP) and worked at CERN, widely regarded as the cradle of Big Data processing in the world. He has received a number of grants and awards related to high-performance distributed computing and is professor in Big Data Ecosystems at the University of Amsterdam.
Sander led the Big Data services team during its incubation, building a team of former CERN scientists to deliver industry solutions in Big Data. The team's current engagements include operational strategy, workshops, privacy analyses, sourcing strategies, turn-key solutions and more.
He publishes regularly about the impact of Big Data on business and society in well-known papers and magazines and is frequently invited as a keynote speaker at major events in science and industry.
Maarten works as a Data Architect / Scientist for KPMG: he both designs end-to-end analytical systems and works as a data scientist within them. This dual specialty allows him to understand and address concerns in both areas.
He specializes in Machine Learning, which allows him to use very efficient techniques for finding value in data. He also greatly enjoys exploring and using the newest tools and techniques available, which has made him a Hadoop and Storm expert.
Maarten has 8 years of experience in designing and building various applications, working for clients in domains including Banking, Telecommunications, Media, Retail and the Public sector.
During the last 3 years he has focused mainly on applying Big Data technology and designing solutions that deal with large quantities of data and parallelization.
Based on this experience Maarten became an architect of the KPMG Analytics & Visualisation Environment (KAVE). This system is the backbone of mature data science environments built by KPMG.
During the last seven years with different high-tech companies, Chan has excelled as a lead software developer, as well as lead Scrum Master and Agile Evangelist, on challenging projects of varying nature, ranging from web applications and enterprise applications to embedded systems.
At KPMG Chan works on both internal and client engagements, with a focus on reviewing, designing, implementing and documenting complex systems. Chan is the lead developer designing and implementing our innovative open-source big-data platform: the KPMG Analytics & Visualization Environment (KAVE).
As a specialist in Data & Analytics at KPMG in the Netherlands, my focus is on leveraging cutting-edge infrastructure and processing technologies to realize effective and reliable D&A solutions for the client.
Sanjay works as a Senior Consultant in the Data Lakes team of KPMG. He is specialized as a Java developer and a Big Data engineer and is involved in in-house asset development for the team and in the design and implementation of Data Lake solutions for clients.
Sanjay has around 10 years of experience in the IT industry in consulting and development on Big Data, Hadoop and Java projects. He has worked extensively on designing and implementing solutions in domains such as e-commerce, data storage and banking, and has experience in the analysis, design and implementation of Big Data and Java based projects.
Erik is team lead of the Big Data & Analytics team in the Netherlands. He is responsible for setting and developing strategic direction, and for managing daily operations and sales.
He graduated in economics and information management and specialized in agile software development. This enables him to advise on the new possibilities of data-driven innovation from a business perspective and to translate that advice into scaled data-driven transformations using agile management techniques.
Erik has over 10 years’ experience in (agile) project management and has experience in several industries including finance, manufacturing, telecom and government. He teaches systems development at the Tias School for Business and Society (IT-auditing).
Lodewijk graduated in Theoretical Physics at the University of Amsterdam in 2013, where he specialized in solving unstable mathematical problems using numerical methods. He has a more abstract background than the rest of the team with a more mathematical focus. This allows him to make the analyses concrete when it comes to the underlying logic and math.
Lodewijk provides insight into complex issues using various visualizations. This makes it easier to understand large and complicated problems a client might face, such that a clear solution may be implemented.
Before joining KPMG Lodewijk was a teaching assistant in mathematics and physics at the University of Amsterdam for over 3 years, teaching students as well as high school teachers.
At KPMG Lodewijk has done analytical work in several sectors including Healthcare, Retail and Insurance. He has experience handling data ranging from geodata to payment data to patient data.
Justin is a graduate student from the Delft University of Technology, majoring in System Engineering, Policy Analysis and Management. His research focuses on building a System Dynamics simulation model on the Dutch Electricity System, which predicts electricity prices towards 2030.
Justin has a more holistic background than the rest of the team, with a focus on socio-technical systems. After joining the team this summer, he will be working on translating complex analyses – executed by data scientists – into customer solutions.
Before joining KPMG, Justin was an intern at Xicato, an LED lighting company in Silicon Valley. Afterwards he worked as a project manager for the University of Amsterdam, advising the organization on sustainable lighting for a large campus project.
Rob joined the KPMG big data team in 2014 with over eight years of experience in Big Data analysis in high energy physics, coupled with software engineering and management for a large-scale CERN experiment.
Rob is interested in designing and implementing smart analysis techniques, understanding the data to enable fast analysis turn-around, ensuring effective communication and documentation of results and methods, coupled with software management, review, testing, and distribution.
During eight years with CERN and related institutes, Rob excelled as a lead software developer, as well as lead researcher for high-energy physics publications, using globally distributed data analytics systems.
At KPMG Rob has worked with retail clients reviewing, designing, implementing and documenting complex analyses. Rob represents the Netherlands in the KPMG global data-driven architecture working group (DART), and is one half of our pair of architects designing and implementing our innovative open-source big-data platform: KPMG Analytics & Visualization Environment (KAVE).
Jori has a background in computer science and is specialized in machine learning algorithms. He is always on the lookout for opportunities to apply such algorithms to learn from historic datasets to solve real problems.
In addition, his software engineering skills and experience are a valuable asset to the team: he is able to transform vague data science ideas into designs for scalable, high-performance software applications without unnecessary complexity. Jori enjoys applying the latest tooling and academic breakthroughs in a practical setting.
Jori also leads the team’s Location Aware Services proposition and oversees the development of the product and rollout to multiple locations.
Jori has worked on a wide range of assignments at KPMG since 2010, all related to data in one way or another, with a small side step into IT auditing. He joined the big data team in 2014 and decided to focus completely on big data analytics.
Ewine graduated from Delft University of Technology with a master's in Media and Knowledge Engineering in 2008, specializing in Multimedia Information Retrieval. Her research covered sentiment analysis from images using facial features and from audio features. Besides her technical background, she has a positive attitude and is helpful, open and studious. Ewine has strong analytical skills, and her technical background allows her to design practical solutions to complex problems and to give strategic advice. She is a true team player, able to communicate with developers and data scientists and to understand the business.
At TU Delft, Ewine worked as a scientific programmer, developing a system for automatic summarization of soccer matches using video analysis algorithms combined with interactive machine learning techniques. At KPMG she has been involved in a variety of roles and projects, including Scrum master on several development projects for a governmental organization and on internal Data and Analytics platform development, and she has coordinated several test teams.
Max joined the KPMG Big Data team in 2015 and has a long background in the purely data-driven field of experimental particle physics. He obtained a PhD in High Energy Physics working at Stanford, and afterwards worked for 8 years at one of the major CERN experiments. Through this work, Max has acquired advanced skills in handling and analyzing large data sets. Within our team Max has a special interest in helping other organizations and companies become more data-driven.
Max has over 12 years of experience in the development and teaching of analysis software, software management, and data quality assurance. He is an expert in statistical data modeling and analysis, and as such has coordinated and reviewed many big data analyses, including the discovery statement of the Higgs boson particle at CERN.
At KPMG Max is involved in the development, testing and deployment of common solutions for big data analyses.
Martijn is an expert in analysing, processing, and modelling large quantities of data. Moreover, he has a great affinity with big-data-enabling technologies.
He holds a PhD in High-Energy Physics and worked for several years as a researcher on physics analyses and Monte Carlo simulations for the ATLAS and CMS experiments at CERN’s Large Hadron Collider. As an advisor and data scientist at KPMG he now helps other organisations in becoming more data-driven.
Through his research career, Martijn has built a strong background in statistics, analysis algorithms, and distributed computing. He gained extensive international experience working at several large European research laboratories (e.g. CERN, DESY).
At KPMG he has worked on various client data analytics projects, including a Dutch telco (exploring the added value of Big Data), an academic hospital (validating new operating room schedules with Monte Carlo simulation techniques), a media company (applying machine learning to identify online user interests and optimize advertisements accordingly), and a bank in Taiwan (optimizing a credit card remediation campaign strategy).
Jan has a background in experimental particle physics and more than 8 years of experience in developing software frameworks and algorithms for data acquisition and processing, and for physics analyses.
Jan loves to experiment and play with new technologies. The physicist inside him believes there is a solution to every problem, no matter how big or small, and he loves to work out complex, abstract ideas and concepts and to implement them.
Jan has performed and published numerous data analyses, both individually and in collaboration, designing, developing and implementing the necessary computational tools. Among these tools is a framework currently used to determine important physics calibration values from the enormous amount of data measured by the LHCb experiment at the LHC at CERN in Geneva.
At KPMG, Jan is involved in the development and testing of tools and algorithms for (big) data analyses and Wi-Fi tracking. Jan also has experience in reverse engineering SAP CD queries that can be used in Hadoop and Hive.