Extensively used the advanced features of PL/SQL such as collections, nested tables, varrays, ref cursors, materialized views, and dynamic SQL. Implemented fundamental web functions and dynamic web applications, and fixed cross-browser compatibility issues for Chrome, Firefox, Safari, and IE. RA sources the data from ART (the internal recruiting DB) and ties it to the various dimensions from PeopleSoft.

Objective : 5 years of professional experience, including 2+ years of work experience in Big Data, Hadoop development, and ecosystem analytics. Formulated a next-generation analytics environment providing a self-service, centralized platform for all data-centric activities, which allows a full 360-degree view of customers from product usage to back-office transactions. Produce hour-ahead and day-ahead forecasts based on local irradiance predictions. Worked on Q1 (PCS statements for all internal employees) and Q3 performance and compensation reporting, as well as compliance and tax audit reporting. Writing a great Hadoop Developer resume is an important step in your job search journey.

Summary : To participate as a team member in a dynamic work environment focused on promoting business growth by providing superior value and service. Wait until all outstanding streaming ingestion requests are complete, then make the schema changes. Designed the table structure and reporting format for global reports so that both customers and development-team contractors can visualize the final report format. Developed database triggers, packages, functions, and stored procedures using PL/SQL, and maintained the scripts for various data feeds. The job duties found on most Data Engineer resumes are installing and testing scalable data management systems; building high-performing algorithms and prototypes; participating in data acquisition; developing dataset processes for data mining and data modeling; and installing disaster recovery procedures.

Ruby development - Created a task scheduling application to run in an EC2 environment on multiple servers. Frequently, custom data ingestion scripts are built upon a tool that is available either open-source or commercially. First Niagara Bank is a community-oriented regional banking corporation. This database handled large amounts of financial data that was updated daily.

Objective : 7+ years of IT experience in architecture, analysis, design, development, implementation, maintenance, and support, with experience in developing strategic methods for deploying big data technologies to efficiently solve Big Data processing requirements. Worked on Recruiting Analytics (RA), a dimensional model designed to analyze the recruiting data in Amazon. Worked with the team to deliver components using agile software development principles. Interfaced with sponsor program management and the user community to develop use cases. Imported data into Redshift tables from files in S3 and from the SQLWorkbench data pumper, as sketched below. It is the largest B2C online retailer in China, and a major competitor to Alibaba's TaoBao. Responsible for the maintenance of secure data transfer. The job description entails working alongside software engineers, the data analytics team, and data warehouse engineers to understand and help implement the needed database requirements, and to troubleshoot existing issues.
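The S3-to-Redshift loads mentioned above are usually done with Redshift's COPY command rather than row-by-row inserts. A minimal sketch, assuming a psycopg2 connection; the cluster endpoint, table, bucket, and IAM role names are all hypothetical:

```python
import os
import psycopg2

# Hypothetical connection details; in practice these come from config or a secrets store.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)

# COPY pulls the files from S3 in parallel across the cluster slices,
# which is far faster than issuing INSERT statements from the client.
copy_sql = """
    COPY staging.daily_feed
    FROM 's3://example-bucket/feeds/2020-01-15/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
conn.close()
```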
Designed and developed logical and physical data models of the schema, and wrote PL/SQL code for data conversion in the Clearance Strategy Project. The candidate for this position should demonstrate these skills: a thorough knowledge of MySQL databases and MS SQL; demonstrable experience working with complex datasets; experience in internet technologies; familiarity with creating and debugging databases; and system management expertise. Responsibilities: Collaborated with the internal and client BAs to understand the requirements and architect a data flow system. Served as the big data competence lead responsible for a $2M business, staff hiring, growth, and go-to-market strategy. Design peak-shaving algorithms to reduce commercial customers' peak power consumption with various energy storage technologies (battery, electric water heater, etc.). You have demonstrated expertise in building and running petabyte-scale data ingestion, processing, and analytics systems leveraging the open-source ecosystem, including Hadoop, Kafka, Spark, or similar technologies.

Objective : Data modeling professional with around 8 years of total IT experience and expertise in data modeling for data warehouse/data mart development, SQL, and analysis of online transactional processing (OLTP), data warehouse (OLAP), and business intelligence (BI) applications. Only if your resume has relevant Data Analyst keywords that match the job description will the ATS pass your resume to the next level. To be considered for top data entry jobs, resume expert Kim Isaacs says it helps to have a resume that shows off the most compelling facts and figures about your skills and work history. Data ingestion can come in many forms, and depending on the team you are working on, the questions may vary significantly. Adapted to and met the challenges of tight release dates. Worked in an agile methodology, interacted directly with the entire team, gave and took feedback on design, suggested and implemented optimal solutions, and tailored the application to meet business requirements while following standards. Partners can perform updates on various attributes.

Objective : More than 10 years of IT experience in data warehousing and business intelligence. (1) Since I am creating a copy of each log, I will now be doubling the amount of space I use for my logs, correct? 2+ years' experience in web service or middle-tier development of data-driven apps. This project implemented interactive navigation for the website. Issue one or several .clear cache streaming ingestion schema commands. Common home-grown ingestion patterns include the following: FTP pattern - when an enterprise has multiple FTP sources, an FTP pattern script can be highly efficient, as sketched below.

Objective : Experienced, result-oriented, resourceful, and problem-solving data engineer with leadership skills. The project is to design and implement different modules, including product recommendation and some webpage implementation. Experience building distributed high-performance systems using Spark and Scala. Experience developing Scala applications for loading and streaming data into NoSQL databases (MongoDB) and HDFS. Eliminated a legacy software dependency by utilizing a new API using Python and XML. Data entry resume sample: view this sample resume for data entry, or download the data entry resume template in Word. Created logical, physical, and dimension models.
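A home-grown FTP pattern script of the kind described above typically just polls each source and stages any files it has not seen before. A minimal sketch using Python's standard ftplib; the hosts, credentials, and directories are hypothetical:

```python
import ftplib
import os

# Hypothetical source list; a real script would load this from configuration.
SOURCES = [
    {"host": "ftp.vendor-a.example.com", "user": "ingest", "remote_dir": "/outbound"},
    {"host": "ftp.vendor-b.example.com", "user": "ingest", "remote_dir": "/exports"},
]
STAGING_DIR = "/data/staging"

def pull_new_files(source):
    """Download files from one FTP source that have not been staged yet."""
    with ftplib.FTP(source["host"]) as ftp:
        ftp.login(source["user"], os.environ["FTP_PASSWORD"])
        ftp.cwd(source["remote_dir"])
        for name in ftp.nlst():
            local_path = os.path.join(STAGING_DIR, source["host"], name)
            if os.path.exists(local_path):
                continue  # already ingested on a previous run
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            with open(local_path, "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)

if __name__ == "__main__":
    for src in SOURCES:
        pull_new_files(src)
```

Skipping files that already exist locally keeps the script idempotent, so it can run on a schedule without re-downloading everything.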
Collaborated with packaging developers to make sure bills of material, specifications, and costing were accurate and finalized for a product launch. Generated EDAs using Spotfire and MS Excel for data analysis. Communicated with clients to clearly define project specifications, plans, and layouts. Overview: The Yelp Data Ingestion API provides a means for partners to programmatically perform updates on a large number of businesses asynchronously. The businesses and fields that can be updated are contract-dependent. Moreover, used Spark to enrich and transform data into the internal data models powering search, data visualization, and analytics. Conducted staff training and made recommendations to improve technical practices. Excels at team leadership, has excellent customer and communication skills, and is fluent in English. Frees up the data science team from having to be involved in the ingestion process. Maintained the Packaging department's budget.

A Data Engineer is responsible for maintaining, improving, cleaning, and manipulating data in a business's operational or analytics databases. This company mainly focuses on home, auto, and business insurance; it also offers a wide variety of flexible coverage and claims options. The Hanover Insurance Group is the holding company for several property and casualty insurance companies. Skills : Hadoop, Spark, Hive, HBase, SQL, ETL, Java, Python. Brainstorm new products, validate engineering design, and estimate market acceptance with back-of-the-envelope calculations. Used Spark and Scala for developing machine learning algorithms that analyze clickstream data. Different mechanisms for detecting the staged files are available, such as automating Snowpipe using cloud messaging. For every data source and endpoint service, create a data transformation module that would be executed by the tasking application. Designed and developed applications to extract and enrich the information and present the results to the system users. They have been in the workforce for 8 years, but only working as data scientists for 2.3 of them. Experience working with data ingestion, data acquisition, data capturing, etc. There are different ways of ingesting data, and the design of a particular data ingestion layer can be based on various models or architectures. Automated data loads leverage event notifications for cloud storage to inform Snowpipe of the arrival of new data files to load, as sketched below.

Objective : Over six years of experience in software engineering, data ETL, and data mining/analysis. Certified CCA Cloudera Spark and Hadoop Developer. Substantially experienced in designing and executing solutions for complex business problems involving large-scale data warehousing, real-time analytics, and reporting solutions. Responsible for checking problems, their resolution, modifications, and necessary changes. The data ingestion layer is the backbone of any analytics architecture. Used Erwin to create tables using forward engineering. Eclipse, Adobe Dreamweaver, Java, HTML, CSS, BootStrap, JavaScript, jQuery, AJAX.
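The Snowpipe pattern above pairs a pipe definition with a cloud storage event notification (for example, S3 to SQS), so files load as soon as they land. A minimal sketch of such a pipe, created here through the Snowflake Python connector; the account, stage, and table names are hypothetical:

```python
import os
import snowflake.connector

# Hypothetical account and credentials, supplied via environment variables.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",
    schema="RAW",
)

# AUTO_INGEST = TRUE tells Snowpipe to load files when the cloud storage
# event notification announces their arrival, instead of on a schedule.
conn.cursor().execute("""
    CREATE PIPE IF NOT EXISTS raw.daily_feed_pipe
      AUTO_INGEST = TRUE
    AS
      COPY INTO raw.daily_feed
      FROM @raw.s3_landing_stage
      FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")
conn.close()
```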
An equivalent of the same in working experience will also be accepted. Objective : Excellence in application development, providing single-handed support for the Consumer Business project during production deployment. Good experience working with OLTP and OLAP databases in production and data warehousing applications. Understanding the existing business processes and interacting with super users and end users to finalize their requirements. Involved in developing and running the Spark applications, using Spark with other Hadoop components. Working to extract … Parse and prepare data for exchange using XML and JSON. Created a clustered website utilizing the Sinatra DSL framework with Thin servers behind Amazon load balancers. With the general availability of Azure Databricks comes support for doing ETL/ELT with Azure Data Factory. Reviewed audit data ingested into the SIEM tool for accuracy and usability.

Mock up visuals with Balsamiq or Excel; locate and vet data sources; prototype the solution and transport it into a test environment for customer approval and tweaks. Worked with analysts to understand and load big data sets into Accumulo. After logging in, the Splunk home screen shows the Add Data icon. On clicking this button, we are presented with the screen to select the source and format of the data we plan to push to Splunk for analysis. Created a multi-threaded application to enhance the loading capability of the data. Worked on the Payroll Datamart project (global), which provided the ability for payroll to view aggregated data across countries. Versalite IT professional experience in Azure Cloud: over 5 years working as an Azure Technical Architect / Azure Migration Engineer, with 15 years overall in IT. Interacted with end users and acquired the reporting needs. You have prior hands-on experience with Java, Scala, Ruby … Designed distributed algorithms for identifying trends in data and processing them effectively. Knowledge and experience in "big-data" technologies such as Hadoop, Hive, and Impala. Meanwhile, we need to write MapReduce programs to process and analyze data stored in HDFS: Hadoop, HDFS, YARN, MapReduce, Sqoop, Flume, Hive, Pig, Zookeeper, Oozie, Oracle, JUnit, MRUnit. Ensure all packaging specification data is complete, current, and approved. Education like a degree in computer science, applied mathematics, or engineering is required. Chung How Kitchen is a Chinese restaurant in Stony Brook, NY.

Data ingestion is a process by which data is moved from one or more sources to a destination where it can be stored and further analyzed. Delivered to internal and external customers via REST API and CSV downloads. Software Engineer, Big Data Hadoop resume examples and samples. Report development - Interview customers to define the current state and guide them to a destination state. Performed post-implementation troubleshooting of new applications and application upgrades. Developed various graphs using Spotfire and MS Excel for analyzing the various parameters affecting the project overrun. Extensively involved in writing SQL queries (subqueries and join conditions) and PL/SQL programming. Follow up with more detailed modeling leveraging internal customer data and relevant external sources. Involved in the creation of tables, partitioned tables, join conditions, correlated subqueries, nested queries, views, sequences, and synonyms for business application development. Since it supports various types of data, it allows for the data to be processed with a variety of tools simultaneously. Integrate relational data sources with other unstructured datasets with the use of big data processing technologies.
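Integrating relational sources with unstructured or semi-structured datasets usually happens in the processing layer. A minimal PySpark sketch, assuming a hypothetical JDBC-accessible customers table and a directory of JSON clickstream events that carry a customer_id field:

```python
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("relational-plus-unstructured").getOrCreate()

# Relational side: a dimension table pulled over JDBC (connection details are hypothetical).
customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/crm")
    .option("dbtable", "public.customers")
    .option("user", "etl_user")
    .option("password", os.environ["DB_PASSWORD"])
    .load()
)

# Semi-structured side: JSON clickstream events landed by the ingestion layer.
events = spark.read.json("s3a://example-bucket/clickstream/2020/01/")

# Tie product usage back to customer attributes for the 360-degree view.
usage_by_segment = (
    events.join(customers, events.customer_id == customers.id, "left")
    .groupBy(customers.segment)
    .count()
)
usage_by_segment.show()
```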
Summary : Seeking a Senior Systems Engineering position with team lead responsibilities that will utilize project management and problem-solving skills gained from education and extensive work experience within the computer industry. Created analytics to allow ad-hoc querying of the data. Not really. This approach can also be used to establish an enterprise-wide data hub, integrate relational data sources with unstructured datasets, and support semantic modeling and visualization. Fixed ingestion issues using regex and coordinated with system administrators to verify audit log data. Finalize and transport into the production environment. Skills : Teradata, SQL, Microsoft Office, with emphasis on Microsoft. Worked in a team environment to fix data quality issues, typically by creating regular expression code to parse the data. Created entity diagrams and relationship diagrams and modeled cascades to maintain referential integrity. HDFS, MapReduce, HBase, Spark 1.3+, Hive, Pig, Kafka 1.2+, Sqoop, Flume, NiFi, Impala, Oozie, ZooKeeper, Java 6+, Scala 2.10+, Python, C, C++, R, PHP, SQL, JavaScript, Pig Latin, MySQL, Oracle, PostgreSQL, HBase, MongoDB, SOAP, REST, JSP 2.0, Servlet, HTML5, CSS, Regression, Perceptron, Naive Bayes, Decision tree, K-means, SVM.

Chung How Kitchen managed to display the restaurant information to their customers. Data onboarding is the critical first step in operationalizing your data lake. Motivated robotic process automation developer with 6+ years of experience in major vendors like Automation Anywhere, UiPath, and Blue Prism, managing all levels of large-scale projects, including analysis and administration. Infoworks not only automates data ingestion but also automates the key functionality that must accompany ingestion to establish a complete foundation for analytics. Delivered a financial data ingestion tool using Python and MySQL. WSFS Bank is a financial services company. Constructed product-usage SDK data and Siebel data aggregations using PySpark, Scala, Spark SQL, and Hive context in partitioned Hive external tables maintained in an AWS S3 location for reporting, data science dashboarding, and ad-hoc analyses.

Suspend streaming ingestion; repeat until successful and all rows in the command output indicate success; then resume streaming ingestion (a sketch of the cache-clearing step follows below). Modernized the data analytics environment by using the cloud-based Hadoop platform Qubole, Splunk, the version control system Git, the automatic deployment tool Jenkins, and the server-based workflow scheduling system Oozie. Works with commodity hardware cheaper than that of a data warehouse. The purpose of this project is to capture all data streams from different sources into our cloud stack based on technologies including Hadoop, Spark, and Kafka. Task lead: led a team of software engineers that developed analytical tools and data exploitation techniques that were deployed into multiple enterprise systems. Developed functional prototypes and iterations for testing. Currently working as a Big Data Analyst with the DSS Advanced Business Intelligence and Infrastructure Analytics - Data Management team in Confidential, working with a CDH 5.3 cluster and its services and instances, and with Apache Spark for batch and interactive processing. Yes. At a glance, absolutely! Examples include backlog analysis, capacity planning, production machinery status, pacing, quality, work in process (WIP), and other real-time production reports. Experience in software development, analysis, datacenter migration, and Azure Data Factory (ADF) V2. Objective : Highly qualified Data Engineer with experience in the industry.
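The suspend / change schema / clear cache / resume sequence scattered through this section is the Azure Data Explorer (Kusto) streaming-ingestion procedure. A minimal sketch of the cache-clearing step using the azure-kusto-data Python client; the cluster URL and database name are hypothetical:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster; device authentication keeps the sketch credential-free.
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
    "https://example-cluster.eastus.kusto.windows.net"
)
client = KustoClient(kcsb)

# After making schema changes, clear the streaming ingestion schema cache.
# Per the procedure above, repeat until every row of the output indicates success.
response = client.execute_mgmt(
    "MyDatabase", ".clear database cache streaming ingestion schema"
)
for row in response.primary_results[0]:
    print(row)  # inspect per-node status before resuming streaming ingestion
```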
Hadoop, HDFS, MapReduce, Spark 1.5, Spark SQL, Spark Streaming, Zookeeper, Oozie, HBase, Hive, Kafka, Pig, Scala, Python. Analyzed the system and made necessary changes and modifications. Skills : C/C++, Python, Matlab/Simulink, XML, shell scripting, YAML, R, Maple, Perl, MySQL, Microsoft Office, Git, Visual Studio. Created trouble tickets for data that could not be parsed. Managing the data ingestion process means the ability to define ingestion workflows, track progress on ingestion jobs, and support basic job management functions, performing operations such as pause, stop, resume, and start on ingestion (and downstream) jobs. Developed pipelines to pull data from Redshift and send it to downstream systems through S3 and SFTP, as sketched below. They can proudly frame up a second-cycle academic degree (74% hold either a Master's or a PhD)… Consulted with client management and staff to identify and document business needs and objectives, and the current operational procedures, for creating the logical data model. Create and maintain all bills of materials and specifications with any changes and/or revisions that may occur. Establish an enterprise-wide data hub consisting of a data warehouse for structured data and a data lake for semi-structured and unstructured data.
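A Redshift-to-downstream pipeline like the one described above often uses UNLOAD to stage the extract in S3 and then forwards it over SFTP. A minimal sketch with psycopg2, boto3, and paramiko; every endpoint, bucket, role, and table name here is hypothetical:

```python
import os
import boto3
import paramiko
import psycopg2

# 1) UNLOAD the extract from Redshift to S3.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)
with conn, conn.cursor() as cur:
    cur.execute("""
        UNLOAD ('SELECT * FROM marts.daily_positions')
        TO 's3://example-bucket/outbound/daily_positions_'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-role'
        PARALLEL OFF ALLOWOVERWRITE;
    """)
conn.close()

# 2) Download the single unloaded part file (PARALLEL OFF yields one slice, suffix 000).
s3 = boto3.client("s3")
s3.download_file("example-bucket", "outbound/daily_positions_000",
                 "/tmp/daily_positions.csv")

# 3) Push the file to the downstream system over SFTP.
transport = paramiko.Transport(("sftp.partner.example.com", 22))
transport.connect(username="feeds", password=os.environ["SFTP_PASSWORD"])
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("/tmp/daily_positions.csv", "/inbound/daily_positions.csv")
sftp.close()
transport.close()
```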
Use semantic modeling and powerful visualization tools for … This data hub becomes the single source of truth for your data. Worked in close association with the business analysts and DBAs for gathering requirements, business analysis, testing, and project coordination, and participated in data modeling JAD sessions. Validate that battery operation maintains compliance with regulations and the battery warranty. Utilized the HP ArcSight Logger to review and analyze collected data from various customers. Create and maintain reporting infrastructure to facilitate visual representation of manufacturing data for purposes of operations planning and execution. Skills : Natural Language Processing, Machine Learning, Data Analysis. Database migrations from traditional data warehouses to Spark clusters.

Summary : A results-oriented senior IT specialist and technical services expert with extensive experience in the technology and financial industries, recognized for successful IT leadership in supporting daily production operations and infrastructure services, application development projects, requirements gathering, and data analysis. If you need to write a resume for a data scientist job, you should have a highly captivating objective statement to begin the resume, to make it irresistible to the recruiter. Also, we built new processing pipelines over transaction records, user profiles, files, and communication data ranging from emails and instant messages to social media feeds.

Data capturing can be automated or based on human intervention, driven by rules triggered by data or exceptions. Wrote machine learning code using MLlib, and created indexes for faster retrieval of the data. Made recommendations to improve technical practices, like performing VACUUM and ANALYZE commands on large production tables. Performance tuning at the table level, updating distribution keys and sort keys on tables. Worked with PCAP data. The determination and upgradation of technical documents were done regularly. Hands-on experience with enterprise databases and data warehouse management and query languages. The project is to build a fully distributed HDFS and integrate the necessary Hadoop tools, including cluster monitoring and maintenance. Forecasts at variable spatial and temporal resolutions for a nationwide fleet of over 200,000 homes. Collected data and performed data transformation and cleaning. Collaborated with the Django-based reporting team to create production reports. So the actual 'data ingestion' occurs on each machine that is producing logs and is a simple cp of each file; a sketch of this pattern follows below. That isn't the only option, but it is a very simple one.
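The log-shipping exchange above ('a simple cp of each file') amounts to copying each finished log into a staging directory that a collector then drains. A minimal sketch of that per-machine step, with hypothetical paths and a hypothetical rotation suffix:

```python
import os
import shutil

LOG_DIR = "/var/log/myapp"        # where the application writes its logs (hypothetical)
STAGE_DIR = "/var/spool/ingest"   # where the collector picks files up (hypothetical)

def stage_finished_logs():
    """Copy rotated (closed) log files into the staging directory exactly once."""
    os.makedirs(STAGE_DIR, exist_ok=True)
    for name in os.listdir(LOG_DIR):
        if not name.endswith(".log.1"):   # only ship rotated files, never the live log
            continue
        src = os.path.join(LOG_DIR, name)
        dst = os.path.join(STAGE_DIR, name)
        if os.path.exists(dst):
            continue  # already staged on a previous run
        shutil.copy2(src, dst)  # copy rather than move, so the app's copy stays intact

if __name__ == "__main__":
    stage_finished_logs()
```

This also suggests why the answer to the space-doubling question earlier in the section is 'not really': if the collector deletes staged files after shipping them, the duplicate exists only until pickup, so the extra disk usage is transient rather than a permanent doubling.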