In this thesis, we study two soft clustering approaches: fuzzy K-means clustering and model-based clustering with Gaussian mixture models. In contrast to the popular K-means hard clustering problem, there are hardly any algorithms for these approaches that provide guarantees on the quality of the computed clusterings.

In the first part of this thesis, we present the first approximation algorithms for the fuzzy K-means problem. We show how the so-called superset sampling technique can be applied to compute an approximation for the fuzzy K-means problem. Moreover, we show that a refined version of a coreset construction for the K-means problem yields a coreset for the fuzzy K-means problem, and we use this construction to derive another approximation algorithm for the fuzzy K-means problem. Finally, we also consider alternative notions of fuzziness and generalize all of our results to a large class of soft clustering problems.

In the second part of this thesis, we consider a model-based clustering approach, namely the method of maximum likelihood for estimating Gaussian mixture models. Our contribution is threefold. First, we compare two popular heuristics, the expectation-maximization algorithm and a stochastic variant thereof. Second, we tackle the problem of initializing the expectation-maximization algorithm: We propose two new initialization methods that aim to close the gap between simple but rather unreliable methods and complex methods whose performance depends crucially on the right choice of hyperparameters. Third, we initiate the theoretical analysis of a constrained version of the maximum likelihood estimation problem, known as the soft K-means problem.
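For readers unfamiliar with the objective in question, the following is a minimal sketch of evaluating the standard fuzzy K-means cost function: each point contributes to every cluster, weighted by its membership raised to a fuzzifier exponent. The function name, the default fuzzifier value, and the pure-Python data layout are illustrative assumptions, not taken from the thesis itself.

```python
def fuzzy_kmeans_cost(points, means, memberships, m=2.0):
    """Standard fuzzy K-means objective (a common textbook formulation):

        sum_i sum_j  r_ij^m * ||x_i - mu_j||^2

    where the memberships r_ij of each point sum to 1 over the clusters
    and the fuzzifier m > 1 controls how "soft" the clustering is.
    """
    total = 0.0
    for x, r in zip(points, memberships):
        for mu, r_ij in zip(means, r):
            # Squared Euclidean distance between point x and mean mu.
            dist_sq = sum((xa - ma) ** 2 for xa, ma in zip(x, mu))
            total += (r_ij ** m) * dist_sq
    return total
```

With hard (0/1) memberships this reduces to the familiar K-means cost; fractional memberships spread each point's contribution over all cluster centers.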