The Sr. Business Intelligence Data Analyst/Engineer must possess solid knowledge of the principles and practices of enterprise data warehouse development, data modeling and testing, and storytelling through data analytics.
Job Duties and Responsibilities
- Serve as an internal data consultant, participating in data integration discussions.
- Participate in the design, development, validation, and testing of new or revised reports. Work with users to verify results and content, develop error or exception reports when applicable, and obtain user sign-off on completed work.
- Analyze and evaluate highly complex business and market data; interpret data for the purpose of determining organizational/program performance, trends, and/or probability.
- Design and apply forecasting and predictive modeling techniques to enhance strategic thinking and business planning.
- Efficiently and effectively operate across multiple projects simultaneously and assume responsibility for the appropriate data/information architecture, design, and quality.
- Meet with key stakeholders to present and review data output to improve operational performance, support decisions, and enhance planning efforts.
- Mentor and train staff on the appropriate usage of data marts, the enterprise data warehouse, and other data sources used in reporting and analytics, including reporting and visualization use cases.
- Use and promote established Software Development Life Cycle (SDLC) standards, QA, and change control procedures.
- Serve as a technology advocate throughout the IT organization to help promote the effective use of the data/information architecture to meet business needs and to build sustained competitive advantage for the enterprise.
As a member of the Data Warehouse Team, you will be involved in all aspects of:
- Designing, implementing, maintaining, and supporting end-to-end ETL solutions, as well as data warehouses and cubes.
- Developing, refining, and maintaining the security, quality, and integrity of data in ETL solutions and the data warehouse at large.
- Establishing, implementing, and upholding data integration standards and methodologies.
- Creating and executing test plans for ETL and data integration solutions.
- Monitoring ETL jobs to verify execution, maintaining performance, and resolving data integration issues as they arise.
- Implementing, maintaining, and supporting the data quality, data catalog, and master data management initiatives of data marts and the enterprise data warehouse system.
- Working with Database and System Administrators to establish and enforce best practices for availability, performance, and data security.
- Proactively designing support activities around data integration, such as ongoing data validation and performance tuning.
- Participating in code and design review to ensure alignment to standards and best practices.
- Reviewing existing data structures and recommending optimizations and redesigns, as warranted.
Qualifications
- Bachelor’s Degree in Computer Science, Information Systems, or other related field, or 7+ years of equivalent work experience required.
- Minimum of 7 years’ experience designing, developing, and tuning complex, large (TB) database management systems in support of operational reporting, decision support, complex data analysis, and system integration.
- Minimum of 7 years’ experience working with and tuning Microsoft SQL Server or other similar relational database management systems.
- Minimum of 7 years’ experience in data modeling, database design (multi-dimensional and data warehouse), data integration, and ETL.
- Master’s Degree is preferred.
- Supply chain industry experience preferred.
- Knowledge of and experience with cloud data solution offerings (Azure Data Lake, Data Factory, Data Management Gateway, Azure Storage Options, DocumentDB, Data Lake Analytics, Stream Analytics, EventHubs, Azure SQL, etc.)
- Experience with big data tools: Hadoop, MapReduce, HBase, Oozie, Flume, and Pig.
- Experience and knowledge of cost-optimized cloud deployments spanning compute, network, and storage.
- Experience with message queuing, stream processing, and highly scalable big data stores.
- Experience with NoSQL databases, such as Cassandra, MongoDB, and Cosmos DB.
- Experience working with DevOps tools: ADO, Git, Jenkins, Docker, etc.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience applying advanced statistical techniques such as regression, clustering, decision trees, exploratory data analysis, simulation, scenario analysis, and modeling.
- Experience with, and keen interest in, exploring the latest technologies and programming languages.
- Strong interest in a future path toward designing, developing, and supporting Machine Learning technologies, algorithms, and models in support of business initiatives, including:
- Determining the appropriate algorithms to solve a given problem through testing, analysis, and validation with the business.
- Data exploration and visualization to understand and define features for a given data set.
- Data model training and tuning to reduce errors and increase reliability and accuracy.
- Participating in innovation forums to identify new ways to leverage data to solve business issues.