This project investigates the extent to which we can equip artificial agents with moral reasoning capacity. Attempting to create artificial agents with moral reasoning capabilities challenges our understanding of morality and moral reasoning to the utmost. It also helps philosophers deal with the inherent complexity of modern organizations. Modern society, with its large multinational organizations and extensive information infrastructures, provides a backdrop for moral theories that is hard to encompass through mere theorising; computerized support for theorising is needed to fully grasp and address this complexity. Moral reasoning capacity will also help us address the challenges that technological artefacts pose: they do not merely contain information about us, they have begun to act on our behalf. With this increasing autonomy comes an increased need to ensure that their behaviour is in line with what we expect of them. To investigate and address these issues, this project outlines a laboratory for philosophy: SophoLab. It consists of a methodology; a framework of modal logic, DEAL; and multi-agent software systems. SophoLab provides the basis for an experimental, computational philosophy. Its viability and usefulness are demonstrated through several experiments.