This tutorial presents methods for protecting confidential data on clients while still allowing servers to train models. In particular, we focus on distributed deep learning approaches under the constraint that clients' local data sources (e.g., photos on phones or medical images at hospitals) may not be shared with the server or with other clients due to privacy, regulatory, or trust concerns. We describe methods including federated learning, split learning, homomorphic encryption, and differential privacy for securely training and inferring with neural networks. We also examine their trade-offs with regard to computational resources and communication efficiency, and share practical know-how for deploying such systems.
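As a minimal illustration of the first of these methods, federated learning can be sketched via federated averaging (FedAvg): each client trains on its own data locally and only model weights travel to the server, which aggregates them by a data-size-weighted average. The sketch below is illustrative only; it uses a toy logistic-regression model standing in for a neural network, and all function names and hyperparameters are assumptions, not part of the tutorial.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    # One client's local training step (toy logistic regression via
    # gradient descent). The raw data never leaves this function.
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))      # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, client_datasets, rounds=10):
    # Server loop: broadcast global weights, collect locally trained
    # weights, and aggregate with a data-size-weighted average.
    sizes = [len(y) for _, y in client_datasets]
    total = sum(sizes)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in client_datasets]
        global_w = sum(n / total * w for n, w in zip(sizes, updates))
    return global_w

# Synthetic, linearly separable data split across three clients.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)
```

Note that only the weight vectors cross the client-server boundary; the trade-off, as discussed later in this tutorial, is one full model exchange per client per round of communication.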