No existing algorithms generate comprehensive, statistically sound reliability information for neural networks. The reliability of a neural network is affected by: (1) the amount of training data, (2) input novelty, (3) data consistency, and (4) time-varying system dynamics. Confidence measures can gauge network reliability by indicating when sufficient training data has been presented for good generalization, when a neural network's output should be trusted, and when periodic retraining should occur in slowly time-varying dynamic systems. They can also help automate neural network controllers in a closed-loop environment. Confidence generation algorithms complement virtually all neural networks and can ease their integration with existing controllers in production environments. Researchers are developing and testing confidence algorithms for each of the four independent factors affecting reliability. This research builds on established theories and innovative ideas, using both artificial and real-world data.

Commercial Applications: Confidence generation algorithms can potentially be used in virtually every real-world neural network solution, including those in process control, finance, retail, insurance, and imaging. They will especially benefit neural network controllers demanding high accuracy in process control and optimization, by allowing them to be safely deployed in production alongside existing successful controllers.
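To make the idea of a confidence measure concrete, the sketch below shows one minimal way a network's reliability could be gauged against input novelty (factor 2 above): confidence is high for inputs near the training data and decays for novel inputs. This is an illustrative assumption, not the algorithm under development; the function `novelty_confidence` and its `scale` parameter are hypothetical names introduced here.

```python
import numpy as np

def novelty_confidence(x, X_train, scale=1.0):
    """Toy confidence score based on input novelty (illustrative only).

    Confidence decays exponentially with the distance from the query
    point to its nearest training example: inputs close to the training
    data score near 1, while far-away (novel) inputs score near 0.
    """
    d = np.min(np.linalg.norm(X_train - x, axis=1))
    return float(np.exp(-d / scale))

# Training inputs clustered near the origin.
X_train = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])

c_near = novelty_confidence(np.array([0.05, 0.05]), X_train)  # familiar input
c_far = novelty_confidence(np.array([5.0, 5.0]), X_train)     # novel input
print(c_near, c_far)
```

A production system would pair a score like this with the network's output, flagging low-confidence predictions so an existing conventional controller can take over, which is the closed-loop integration scenario described above.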