Information systems, such as question answering systems and web search engines, increasingly rely on crowdsourced knowledge bases to answer questions and to display important information about entities. While crowdsourcing enables the collection of vast amounts of information, it also introduces the problem of vandalism and other damaging contributions. In this thesis, we focus on Wikidata, the largest structured, crowdsourced knowledge base on the web, and develop novel machine learning-based vandalism detectors to reduce the manual reviewing effort. To this end, we carefully construct large-scale vandalism corpora, vandalism detectors with high predictive performance, and vandalism detectors with low bias against certain groups of editors. We extensively evaluate our vandalism detectors in a number of settings, and we compare them to the state of the art represented by the Wikidata Abuse Filter and the Objective Revision Evaluation Service by the Wikimedia Foundation. Our best vandalism detector achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.991, significantly outperforming the state of the art; our fairest vandalism detector achieves a bias ratio of only 5.6, compared to values of up to 310.7 for previous vandalism detectors. Overall, our vandalism detectors enable a conscious trade-off between predictive performance and bias, and they may play an important role towards a more accurate and welcoming web in times of fake news and biased AI systems.
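As a brief aside on the headline metric: the ROC-AUC reported above equals the probability that a randomly chosen vandalism revision receives a higher detector score than a randomly chosen benign revision. A minimal sketch of this pairwise interpretation, in pure Python (the function name and the toy labels/scores are illustrative, not taken from the thesis):

```python
def roc_auc(labels, scores):
    """ROC-AUC via the pairwise (Mann-Whitney) interpretation:
    the fraction of positive/negative pairs that are ranked correctly,
    counting ties as half-correct. O(n^2), fine for illustration."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    correct = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return correct / (len(pos) * len(neg))

# Toy example: two benign (0) and two vandalism (1) revisions with
# hypothetical detector scores.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
print(auc)  # 0.75: 3 of the 4 positive/negative pairs are ranked correctly
```

A score of 0.5 corresponds to random guessing, so the reported 0.991 means the detector ranks vandalism above benign edits for almost every such pair.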