Instructor Information

John Sheppard

Work Phone: 406-994-4835

Dr. Sheppard is a Norm Asbjornson College of Engineering Distinguished Professor in the Gianforte School of Computing at Montana State University and is a former Adjunct Professor in the Computer Science Department at Johns Hopkins. His research interests include model-based and Bayesian reasoning, reinforcement learning, game theory, and fault diagnosis/prognosis of complex systems. He is a Fellow of the IEEE, elected "for contributions to system-level diagnosis and prognosis."

Dr. Sheppard received his BS in computer science from Southern Methodist University in 1983. Later, while a full-time member of industry, he received an MS in computer science in what is now Johns Hopkins Engineering for Professionals (1990). He continued his studies and received his Ph.D. in computer science from Johns Hopkins in the day school (1997), completing a dissertation on multi-agent reinforcement learning and Markov games.

Prior to entering academia full time, Dr. Sheppard spent 20 years in industry, most recently as a research fellow at ARINC Incorporated. Dr. Sheppard became a member of the EP faculty in 1994, where he teaches courses in machine learning and population-based algorithms. He also mentors independent studies and advises several graduate students. In 2022, he received the Provost’s Award for Graduate Research and Creativity Mentoring at Montana State University, which recognizes excellence in advising MS and PhD students.

Course Information

Course Description

EN.605.649 - Introduction to Machine Learning

Analyzing large data sets (“Big Data”) is an increasingly important skill set. One of the disciplines being relied upon for such analysis is machine learning. In this course, we will approach machine learning from a practitioner’s perspective. We will examine the issues that impact our ability to learn good models (e.g., inductive bias, the curse of dimensionality, the bias-variance dilemma, and no free lunch). We will then examine a variety of approaches to learning models, covering the spectrum from unsupervised to supervised learning, as well as parametric versus non-parametric methods. Students will explore and implement several learning algorithms, including logistic regression, nearest neighbor, decision trees, and feed-forward neural networks, and will incorporate strategies for addressing the issues impacting performance (e.g., regularization, clustering, and dimensionality reduction). In addition, students will engage in online discussions focusing on the key questions in developing learning systems. At the end of this course, students will be able to implement and apply a variety of machine learning methods to real-world problems, as well as assess the performance of these algorithms on different types of data sets. Prerequisite(s): EN.605.202 – Data Structures or equivalent.

Prerequisites

EN.605.621 OR EN.685.621 Foundations of Algorithms

Course Goal

To develop a broad understanding of the issues in developing and implementing machine learning algorithms and systems, especially as they relate to modern, data-intensive problems.

Course Objectives

By the end of the course, students should be able to:

  • Determine the inductive bias of learning methods and how that bias potentially affects learning performance.
  • Differentiate between classification and function approximation (regression) in learning.
  • Assess the empirical performance of machine learning algorithms on different types of data sets.
  • Implement and apply a variety of machine learning methods to real-world problems.

When This Course is Typically Offered

Dr. Sheppard currently offers this course online only. As of 2019, multiple sections of the course are offered in the Fall, Spring, and Summer semesters.

Syllabus

  • Introduction to Machine Learning
  • Bayesian Decision Making and Parametric Models
  • Nonparametric Learning
  • Basis Functions and Mixture Models
  • Decision Tree Induction
  • Rule Induction
  • Linear Models
  • Linear Networks
  • Multi-Layer Neural Networks
  • Dimensionality Reduction
  • Alternative Neural Network Models
  • Markov Decision Processes and Reinforcement Learning
  • Unsupervised Learning and Clustering
  • Temporal Difference Methods in Reinforcement Learning

Student Assessment Criteria

  • Five Small Group Discussions: 15%
  • Six Muddy Point Discussions: 15%
  • Six Short Quizzes: 10%
  • Five Programming Projects: 60%

In addition to the programming assignments, class participation is a critical part of this course. This is why 30% of the grade is based on muddy point discussions and small group discussions. Grading is based on timeliness, frequency of posting, and substantiveness of the information posted.

Computer and Technical Requirements

It is expected that each student is proficient in at least one higher-level language. For machine learning, popular languages include Java, Python, and C#. For the programming assignments, only basic libraries may be used. Libraries such as scikit-learn, Weka, RapidMiner, MLPack, Keras, Theano, TensorFlow, PyTorch, or similar are not permitted under any circumstances. No programming language is specified; however, it is recommended that all assignments be completed in Java, C#, or Python. Languages such as MATLAB and R are not permitted, except to support analysis of the results of the experiments run.

Participation Expectations

Class participation takes place in two ways:

  • Small group discussions
  • Muddy point exercises 

Small group discussions take place in the "Groups" area of Blackboard. Each small group is distinct from the student's muddy point group and consists of three or four class members. For each discussion, the instructor posts an open-ended question in the class discussion forums. Students then engage in the discussion within the private discussion forum for their group. Each student is required to provide at least five substantive posts in response to the question and to other posts from students in the group. These five posts must occur over at least three distinct days of the week. There are seven small group discussions, which occur during odd-numbered modules.

Muddy point exercises consist of each member of the class posting a unique point of confusion about the content delivered during the current module. Students are assigned to muddy groups in which their "muddy buddies" respond to the muddy point exercise. These groups are either pairs or triples. If a group is a triple, responses must follow a round-robin pattern so that each member responds to exactly one post by another member of the group. Muddy points are posted by the end of Day 3 of each module, and muddy responses are required by the end of Day 5 of the module. There are seven muddy point exercises, which occur during even-numbered modules.

Textbooks

Textbook information for this course is available online through the MBS Direct Virtual Bookstore.

Course Notes

There are no notes for this course.

Final Words from the Instructor

This course has a high workload, although the number of assignments was recently reduced from six to five. Even so, each student will receive extensive hands-on experience implementing and analyzing the behavior of several machine learning algorithms. Students will also gain experience dealing with real-world data sets and their associated real-world issues (missing data, noisy data, conflicting information, etc.).

The class also assumes a fair amount of mathematical background on the part of students. It is strongly recommended that students have a solid foundation in calculus (through multivariate calculus), linear algebra, discrete mathematics, probability, and statistics. That said, those interested in the Artificial Intelligence degree will see the degree program refer to this course as "theoretical." It is not: it is algorithmic in nature (emphasizing algorithm implementation), with a strong orientation towards the practitioner.

Term Specific Course Website

http://blackboard.jhu.edu

(Last Modified: 01/11/2022 05:55:51 PM)