Research in project management is plagued by a lack of good data. What business will readily share data on projects that were not successfully completed? Many organizations consider their project data proprietary, and some may even regard their project implementation processes as a critical success factor. Of course, that assumes they are consistently successful at project implementation, something rare in most organizations.
When building predictive models, and eventually tools using artificial intelligence (AI), for project management, the lack of data is a serious issue. Too little data contributes to a problem known as 'underfitting': the algorithm does not have enough examples to learn the underlying correlations and consequently makes inaccurate predictions. Underfitting can also occur when the model itself is too simple and a more complex algorithm is required. A truly successful model should consist of more than historical data; it should also incorporate current project data, feeding results back into the model so that prediction accuracy improves continuously.
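The underfitting problem described above can be sketched with a toy example. The data below (project "scope" versus "effort") is hypothetical and chosen only for illustration: a model that is too simple, here a straight line fitted to a quadratic relationship, cannot capture the pattern even in its own training data, which is the signature of underfitting.

```python
# A minimal sketch of underfitting, using invented toy data.
# A straight line is fitted to data that actually follows a curve;
# the line's error stays high even on the data it was trained on.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def mse(xs, ys, predict):
    """Mean squared error of a prediction function over the data."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Hypothetical project data: effort grows quadratically with scope.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [x * x for x in xs]

a, b = fit_line(xs, ys)
linear_err = mse(xs, ys, lambda x: a * x + b)
quad_err = mse(xs, ys, lambda x: x * x)  # the true curve fits exactly

print(f"linear model MSE: {linear_err:.2f}")   # → 21.00 (underfitting)
print(f"quadratic model MSE: {quad_err:.2f}")  # → 0.00
```

The linear model's error cannot be reduced by gathering more points from the same curve; only a more expressive model fixes it. The separate small-data problem is different: with only a handful of projects, even the right model family cannot be estimated reliably.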
However, at this point it is too early to determine whether projects need to be grouped by common factors such as function or objective, rather than served by a single algorithm that fits all projects. Will we need to build AI models that fit only subsets of projects, such as construction projects, software implementation projects or business process redesign projects? In his book 'Superintelligence', Nick Bostrom identifies two streams of AI: narrow AI, which focuses on achieving a specific objective, and strong AI, known as artificial general intelligence or whole brain emulation. The field of project management faces a similar question for machine learning tools: can a single model be developed that predicts project success for all projects, regardless of factors such as size, function and purpose, or will several subsets of project AI tools need to be developed?