This thesis studies the design and quality of energy-efficient scheduling algorithms, especially with respect to speed scaling. Speed scaling finds practical application in techniques like Intel's SpeedStep and AMD's PowerNow!, which allow a processor to change its speed during runtime. This way, it may use low, energy-efficient speeds for the majority of the time and only enter a less energy-efficient mode when the workload becomes too high to guarantee a sufficient quality of service. Theoretical investigations of such models were initiated by Yao, Demers, and Shenker [FOCS:1995]. They combined deadline scheduling with speed scaling, striving to schedule all jobs while minimizing the total energy consumption.

The results presented in this thesis align with the rich body of literature on variants of this model. The main results are presented in four parts (Chapters 3 to 6). Parts one and two study different prize-collecting variants of the original problem, where the scheduler may miss deadlines if it pays a job-specific penalty. Part three replaces the deadline constraints by the flow time objective. The last part introduces a new type of resource-constrained scheduling. While it does not directly consider energy, it might be a first step towards a theoretical model where the energy source is shared between several processors.
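To make the Yao-Demers-Shenker setting concrete, the following is a minimal sketch of their YDS algorithm for a single processor, assuming the standard power function \(P(s) = s^\alpha\) with \(\alpha > 1\) (under which convexity makes the lowest feasible speed optimal). Jobs are triples (release time, deadline, work); the algorithm repeatedly extracts the interval of maximum density. All function and variable names here are illustrative, not taken from the thesis.

```python
import itertools

def yds(jobs):
    """Sketch of the YDS algorithm: repeatedly find the densest
    interval [a, b] (total work of jobs fully contained in it,
    divided by its length), run those jobs at that density, remove
    them, and contract the timeline around [a, b].
    Returns a list of (interval, speed, jobs_run_in_interval); note
    that intervals after the first round refer to the contracted
    timeline, not original time."""
    jobs = list(jobs)
    schedule = []
    while jobs:
        # Candidate interval endpoints are release times and deadlines.
        times = sorted({t for r, d, w in jobs for t in (r, d)})
        best = None
        for a, b in itertools.combinations(times, 2):
            work = sum(w for r, d, w in jobs if a <= r and d <= b)
            density = work / (b - a)
            if best is None or density > best[0]:
                best = (density, a, b)
        density, a, b = best
        inside = [j for j in jobs if a <= j[0] and j[1] <= b]
        schedule.append(((a, b), density, inside))
        # Remove scheduled jobs and contract time around [a, b].
        jobs = [(shrink(r, a, b), shrink(d, a, b), w)
                for r, d, w in jobs if (r, d, w) not in inside]
    return schedule

def shrink(t, a, b):
    """Map a time point onto the timeline with [a, b] removed."""
    if t <= a:
        return t
    if t >= b:
        return t - (b - a)
    return a
```

For example, with jobs (0, 2, 4) and (0, 4, 2), the densest interval is [0, 2] with density 2, so the first job runs at speed 2 there; the remaining job is then run at speed 1. Convexity of \(s^\alpha\) is what justifies running each critical interval at a single constant speed.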