As the scale of artificial neural networks continues to expand to tackle increasingly complex tasks or to improve prediction accuracy on specific tasks, the challenges associated with computational demand, hyper-parameter tuning, model interpretability, and deployment costs intensify. Addressing these challenges requires a deeper understanding of how network structure influences network performance, such as learning accuracy and resilience. Here, based on a large sample of 882,000 motifs, we report the functional roles of incoherent and coherent 3-node motifs in shaping overall network performance. By analyzing the static and dynamic properties of these motifs, we quantify their representational capacity, numerical stability, and flexibility. Our findings reveal that incoherent loops exhibit superior representational capacity and numerical stability. Testing the predictions of this noise-resilience analysis, we demonstrate that neural networks that favor incoherent loops are more resilient to noise during training and to environmental perturbations. We show the importance of incoherent loops in the classical pole-balancing (CartPole) reinforcement learning task and its generalization to biological, chemical, and medical applications. This work shows the functional impact of structural motif differences on the performance of artificial neural networks, offering foundational insights for designing more resilient and accurate networks.
You can clone this code repository with the following command, or download it as a ZIP archive.
git clone https://github.com/HaolingZHANG/MotifEffect.git
This repository requires Python 3.7 or later and some well-established libraries listed in the requirements file. Once all the prerequisite libraries are properly installed, setting up this repository is virtually instantaneous. In addition, this library has no non-standard hardware requirements.
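The prerequisite libraries can typically be installed from the requirements file, for example:
pip install -r requirements.txt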
In this work, four experiments are executed. Specifically, we
- analyze the trade-off tendencies between the representational capacity and numerical stability of the two types of loops, using colliders as a benchmark, based on a gradient-descent strategy and the motif-associated Lipschitz constant (illustrated by the sketch after this list).
- analyze the origin of these trade-off tendencies using a maximum-minimum loss search.
- analyze the training efficiency and robustness of neural networks produced by the classical neuroevolution method (i.e., NEAT) and its three variants, which differ in their motif generation tendencies.
- analyze the failure cases of neural networks produced by NEAT and its three variants in three real-world applications.
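The motif-associated Lipschitz constant used in the first experiment can be read as a bound on how steeply a motif's output landscape changes with its inputs. Below is a minimal, self-contained sketch of this idea for an assumed toy collider-like motif with tanh activations; it is only an illustration, not the repository's implementation (see estimate_lipschitz and estimate_lipschitz_by_motif in effect/robustness.py for the actual code).

```python
# Minimal sketch (assumed toy example, not the repository's implementation):
# numerically estimate the Lipschitz constant of a motif's output landscape
# as the largest local gradient norm over a sampled input grid.
import numpy as np

def toy_collider_motif(x1, x2, weights=(1.0, -1.0, 1.0, 1.0), activation=np.tanh):
    """A hypothetical collider-like 3-node motif: two input nodes feed one output node."""
    w1, w2, w3, w4 = weights
    return activation(w3 * activation(w1 * x1) + w4 * activation(w2 * x2))

def estimate_lipschitz_numerically(motif, value_range=(-1.0, +1.0), points=101):
    """Sample the output landscape and return the maximum finite-difference gradient norm."""
    samples = np.linspace(value_range[0], value_range[1], points)
    x1, x2 = np.meshgrid(samples, samples)
    landscape = motif(x1, x2)
    step = samples[1] - samples[0]
    grad_a, grad_b = np.gradient(landscape, step, step)  # partial derivatives along both axes
    return float(np.max(np.sqrt(grad_a ** 2 + grad_b ** 2)))

print(estimate_lipschitz_numerically(toy_collider_motif))
```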
In this code repository, all experimental designs and parameter settings are stated explicitly (see run_1_tasks.py and the NEAT configuration descriptions). Within a few months of compute, you can obtain raw files in the raw folder that are consistent with ours (or similar, due to randomization) by executing the scripts. Meanwhile, all figures and videos are created by show_main.py, show_supp.py, and show_video.py, which means there is no manual data filtering or adjustment in the presentation.
Notably, if you want to repeat the entire experiment, you can run the scripts under the works folder in the following order:
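1. run_1_tasks.py, to run the experiments and produce the raw files;
2. run_2_packs.py, to package the presented data from the experimental results;
3. show_main.py, show_supp.py, and show_video.py, to create the figures and videos from the generated data.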
The raw data, amounting to approximately 49.8 GB, will be made available after the publication of the corresponding article (it is currently accessible only to reviewers). Repeating all the experiments on an 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz may take several months.
├── effect // Source code of the 3-node motif effect experiments.
│ ├── __init__.py // Exports of classes and methods, plus the implementation of the monitor class.
│ ├── networks.py // Motif class based on PyTorch.
│ │ ├── RestrictedWeight // Restricted weight module.
│ │ ├── RestrictedBias // Restricted bias module.
│ │ ├── NeuralMotif // Neural Motif module.
│ ├── operations.py // Basic operations.
│ │ ├── prepare_data // Prepare the dataset from the variable range and the number of sampling points for both x and y.
│ │ ├── prepare_data_flexible // Prepare the dataset from a flexible variable range and number of sampling points.
│ │ ├── prepare_motifs // Prepare motif based on the selected parameters.
│ │ ├── calculate_landscape // Calculate the output landscape of the selected motif.
│ │ ├── calculate_values // Calculate the output values of the selected motif.
│ │ ├── calculate_gradients // Calculate the gradient matrix of the selected motif.
│ │ ├── detect_curvature_feature // Detect the curvature feature of a given landscape.
│ │ ├── generate_motifs // Generate qualified motifs that meet specific requirements.
│ │ ├── generate_outputs // Generate all output landscapes and the corresponding parameters based on the given parameter domain.
│ │ ├── calculate_differences // Calculate norm differences between motif landscapes.
│ ├── robustness.py // Robustness-related operations.
│ │ ├── evaluate_propagation // Evaluate the error propagation through the selected motif.
│ │ ├── estimate_lipschitz // Estimate the Lipschitz constant of the output signals produced by the selected motif.
│ │ ├── estimate_lipschitz_by_motif // Estimate the Lipschitz constant from a selected motif.
│ ├── similarity.py // Similarity-related operations.
│ │ ├── execute_escape_processes // Execute the escape process for multiple pairs of an escape motif and several catch motifs.
│ │ ├── execute_catch_processes // Execute the catching process for referenced motifs and several catch motifs.
│ │ ├── maximum_minimum_loss_search // Find the maximum-minimum L2-norm loss (as the representation capacity bound) between source motif and target motifs.
│ │ ├── minimum_loss_search // Train the target motif to achieve the source motif and find the minimum L2-norm loss between the two motifs.
├── practice // Source code of the neuroevolution experiments.
│ ├── __init__.py // Exports of classes and methods.
│ ├── agent.py // Trained agent classes.
│ │ ├── DefaultAgent // Default trained agent.
│ │ ├── NEATAgent // Trained agent for the NEAT method and its variants.
│ │ ├── create_agent_config // Create the training configuration of an agent.
│ │ ├── obtain_best // Obtain the best genome in the specific task.
│ │ ├── train_and_evaluate // Train and evaluate agents in a given NEAT task.
│ │ ├── train // Train an agent in a given NEAT task.
│ │ ├── evaluate // Evaluate an agent in a given NEAT task.
│ ├── evolve.py // Neuroevolution method variation.
│ │ ├── AdjustedGenome // Adjusted Genome, agent of NEAT method (prohibiting the appearance of loops based on the given setting).
│ │ ├── AdjustedGenomeConfig // Adjusted Genome configuration (prohibiting the appearance of loops based on the given setting).
│ │ ├── AdjustedReproduction // Adjusted NEAT method (prohibiting the appearance of loops based on the given setting).
│ │ ├── create_adjacency_matrix // Create the adjacency matrix from the given genome and its corresponding configuration.
│ ├── motif.py // Motif-related operations.
│ │ ├── acyclic_motifs // Dictionary of 3-node acyclic motifs, i.e., incoherent loops, coherent loops, colliders.
│ │ ├── motif_matrix_to_fingerprint // Convert a motif matrix to its fingerprint for saving.
│ │ ├── fingerprint_to_motif_matrix // Convert a fingerprint back to a motif matrix for motif calculation.
│ │ ├── motif_matrix_to_connections // Convert a motif matrix to graph connections for agent training.
│ │ ├── connections_to_motif_matrix // Convert graph connections back to a motif matrix for motif calculation.
│ │ ├── is_same_motif // Judge whether the two motifs are the same.
│ │ ├── obtain_motif // Obtain a candidate motif of specific nodes in the adjacency matrix.
│ │ ├── compliance_motif_specification // Check whether the obtained motif complies with the motif specification.
│ │ ├── collect_motifs // Collect all the rational motifs from the adjacency matrix.
│ │ ├── count_motifs_from_adjacency_matrix // Count the motif frequencies from a given adjacency matrix.
│ │ ├── detect_motifs_from_adjacency_matrix // Detect the given motifs from an adjacency matrix.
│ ├── noise.py // Noise modules.
│ │ ├── NormNoiseGenerator // Noise generator based on the norm.
│ ├── task.py // Task modules.
│ │ ├── BasicTask // Abstract base task.
│ │ ├── GymTask // OpenAI Gym task.
│ │ ├── NEATCartPoleTask // CartPole task for NEAT method and its variants.
│ │ ├── SupervisionTask // Supervision task for NEAT method and its variants.
├── works // Experiment module of this work.
│ ├── confs // Configuration folder of neuroevolution tasks.
│ ├── data // Plotting data folder for all the experiments.
│ ├── raw // Raw data folder of all the experiments.
│ ├── show // Plotted figure folder for all the experiments.
│ ├── temp // Temporary folder for the figures used in Videos S1 - S8.
│ ├── __init__.py // Preset parameters of the experiments.
│ ├── run_1_tasks.py // Run the experiments of this work.
│ ├── run_2_packs.py // Package all the presented data from the experimental results.
│ ├── show_main.py // Plot the figures (in the main text) from the generated data.
│ ├── show_supp.py // Plot the figures (in the supporting materials) from the generated data.
│ ├── show_video.py // Create videos (in the supporting materials) from the generated data.
├── LICENSE // License of this library (GPL-3.0 license).
├── README.md // Description document of this library.
├── requirements.txt // Basic library requirements of this library.
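As a rough intuition for the motif-related operations in practice/motif.py, the following simplified, self-contained sketch counts 3-node connection patterns in a signed adjacency matrix. It is a conceptual illustration only: unlike collect_motifs and count_motifs_from_adjacency_matrix, it neither merges isomorphic motifs nor enforces the acyclic-motif specification.

```python
# Conceptual illustration only (not the repository's implementation): count 3-node
# connection patterns in a signed adjacency matrix, as a simplified view of what
# collect_motifs / count_motifs_from_adjacency_matrix provide in practice/motif.py.
from itertools import combinations
from collections import Counter
import numpy as np

def count_three_node_patterns(adjacency: np.ndarray) -> Counter:
    """Count 3-node sub-adjacency patterns (with at least two edges) by fingerprint."""
    counts = Counter()
    for nodes in combinations(range(adjacency.shape[0]), 3):
        sub_matrix = adjacency[np.ix_(nodes, nodes)]
        if np.count_nonzero(sub_matrix) >= 2:           # ignore nearly unconnected triples
            counts[tuple(sub_matrix.flatten())] += 1    # fingerprint = flattened sub-matrix
    return counts

# A tiny directed network with signed edges (+1 for activation, -1 for inhibition).
example_matrix = np.array([
    [0, +1, +1,  0],
    [0,  0, +1,  0],
    [0,  0,  0, -1],
    [0,  0,  0,  0],
])
print(count_three_node_patterns(example_matrix))
```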