Applying to Teradata: need details
Job Description
Minimum Requirements
Mandatory:
3-6 years of experience managing and supporting large-scale production Hadoop environments (configuration management, monitoring, and performance tuning) on any of the Hadoop distributions (Apache, Hortonworks, Cloudera, MapR, IBM BigInsights, Pivotal HD)
3-6 years of experience with scripting languages (Linux shell, SQL, Python); must be proficient in shell scripting
3-6 years of experience with administrative activities such as:
Administration, maintenance, control, and optimization of Hadoop capacity, security, configuration, process scheduling, and errors.
Management of data, users, and job execution on the Hadoop System
Experience in Backup, Archival and Recovery (BAR) and High availability (HA)
Plan for and support hardware and software installation and upgrades.
3-6 years of experience with Hadoop monitoring tools (Cloudera Manager, Ambari, Nagios, Ganglia, etc.)
Experience may include (but is not limited to) build and support work: design, configuration, installation and upgrades, monitoring, and performance tuning of any of the Hadoop distributions
Hadoop software installation and upgrades
Experience with workload/performance management
Automation: experience with CI/CD (Continuous Integration / Continuous Deployment) tooling such as Jenkins, Ansible, Terraform, Puppet, or Chef
Implementing standards and best practices to manage and support data platforms as per distribution.
Proficiency in Hive internals (including HCatalog), Sqoop, Pig, Oozie, and Flume/Kafka
Experience in MySQL & PostgreSQL databases.
ITIL Knowledge.
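As an illustration of the scripting proficiency the list above calls for, here is a minimal Python sketch of a routine admin task: parsing a capacity summary in the shape of `hdfs dfsadmin -report` header output and flagging when DFS usage crosses an alert threshold. The sample text and the 80% threshold are made up for the example.

```python
def parse_dfs_report(report: str) -> dict:
    """Parse 'Key: value' lines from a dfsadmin-style report into a dict."""
    stats = {}
    for line in report.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

def capacity_alert(stats: dict, threshold_pct: float = 80.0) -> bool:
    """Return True when the 'DFS Used%' figure exceeds the alert threshold."""
    used = float(stats["DFS Used%"].rstrip("%"))
    return used > threshold_pct

# Sample text modeled on an `hdfs dfsadmin -report` header (values are invented).
SAMPLE = """\
Configured Capacity: 1099511627776 (1 TB)
DFS Used: 934155859066 (870.01 GB)
DFS Used%: 84.96%
Live datanodes (4):
"""

stats = parse_dfs_report(SAMPLE)
print(capacity_alert(stats))  # prints True: 84.96% is above the 80% default
```

In practice a monitoring hook like this would read the live command's output (or a Cloudera Manager / Ambari API response) instead of a hard-coded string, and raise a Nagios-style alert rather than printing.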
Preferred:
Experience with DR (Disaster Recovery) strategies and principles.
Development or administration on NoSQL technologies like HBase, MongoDB, Cassandra, Accumulo, etc.
Development or administration on Web or cloud platforms like Amazon S3, EC2, Redshift, Rackspace, OpenShift etc.
Development/scripting experience on Configuration management and provisioning tools e.g. Puppet, Chef
Web/Application Server & SOA administration (Tomcat, JBoss, etc.)
Development, implementation, or deployment experience on the Hadoop ecosystem (HDFS, MapReduce, Hive, HBase)
Experience with any one of the following will be an added advantage:
Hadoop integration with large scale distributed data platforms like Teradata, Teradata Aster, Vertica, Greenplum, Netezza, DB2, Oracle, etc.
Proficiency with at least one of the following: Java, Python, Perl, Ruby, C or Web-related development
Knowledge of Business Intelligence and/or Data Integration (ETL) operations delivery techniques, processes, methodologies
Exposure to data acquisition, transformation, and integration tools like Talend, Informatica, etc., and BI tools like Tableau, Pentaho, etc.
Linux Administrator certification.