AI cannot cure cancers.
(Fig.1) AI's rate of predicting molecules that bind to target proteins is too low to be useful.
The 1st, 3rd, 7th, and last paragraphs of this hyped news (8/11/2025) say
"Traditional drug development methods involve identifying a target protein (e.g., a cancer cell receptor) that causes disease, and then searching through countless molecular candidates (potential drugs) that could bind to that protein and block its function. This process is costly, time-consuming, and has a low success rate" ← AI is useless for drug discovery
"The research team... has developed an AI model named BInD, which can design and optimize drug candidate molecules tailored to a protein's structure alone—without needing prior information about binding molecules (= experimental training dataset is needed, though ). The model also predicts the binding mechanism between the drug and the target protein." ← fake news
"The research team explained that the AI operates based on a "diffusion model"—a generative approach where a structure becomes increasingly refined from a random state. This is the same type of model used in AlphaFold 3"
"it is expected (= just speculation, still useless ) to enable faster and more reliable drug development."
↑ The above hyped news is based on this research paper ↓
p.6-left-last-paragraph says "BInD achieved the highest success rate across all three criteria, with 4.7% of generated molecules passing every filter" ← This new AI's success rate of predicting molecules that bind to target proteins is very low (= only 4.7% ), which is useless, and it cannot predict side effects.
p.8-Figure 4 shows this research used only a virtual computer docking screening method (= Vina docking score, this-p.1-introduction~p.2 ), and did Not experimentally confirm whether the predicted molecules actually bound to the target proteins.
p.12-left-Dataset says "We used the CrossDocked2020 dataset to train and test BInD. "
↑ This CrossDocked2020 dataset is based on experimental protein structures = PDB (= this-p.6-CrossDocked2020 dataset ) and some unreliable data (= this-p.3-4th-paragraph ).
So this new hyped AI's rate of predicting binding molecules was very bad (= only 4.7% ), and it was based only on a virtual screening method, which cannot confirm that the predicted molecules really bind to the target proteins.
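For reference, a "success rate" of this kind is simply the fraction of generated molecules whose computed scores clear every chosen cutoff; no experiment is involved. Below is a minimal Python sketch with made-up molecules and hypothetical cutoff values (the actual filters and thresholds in the paper differ):

    # Minimal sketch with hypothetical thresholds, NOT the paper's exact criteria:
    # a virtual-screening "success rate" is just the fraction of generated molecules
    # whose computed scores pass every filter -- no experimental binding test involved.

    def passes_all_filters(mol):
        return (mol["vina_score"] <= -8.0    # hypothetical docking-score cutoff (kcal/mol)
                and mol["qed"] >= 0.5        # hypothetical drug-likeness cutoff
                and mol["sa_score"] <= 4.0)  # hypothetical synthetic-accessibility cutoff

    generated = [  # toy stand-ins for generated molecules with precomputed scores
        {"vina_score": -9.1, "qed": 0.62, "sa_score": 3.1},
        {"vina_score": -7.2, "qed": 0.71, "sa_score": 2.8},
        {"vina_score": -8.5, "qed": 0.41, "sa_score": 3.9},
    ]

    success_rate = sum(passes_all_filters(m) for m in generated) / len(generated)
    print(f"fraction passing every filter: {success_rate:.1%}")  # 33.3% for these toy values

Even a molecule that passes all such computed filters may not bind at all in a real experiment, which is the point of the criticism above.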
The present experimental methods for determining protein structures, such as X-ray crystallography, NMR, and cryo-electron microscopy, are useless, unable to clarify most proteins' real structures or functions, which is why today's AI, trained on such useless datasets, is impractical, too.
This-7th-paragraph (6/25/2025) says
"Breathless headlines promise that AI will slash drug development timelines... these claims consistently fall flat. Even if we could design perfect binders (which we can't do ), that barely moves the needle.."
↑ Only multi-probe atomic force microscopes can clarify real protein structures, but this is hampered by the useless, shapeless quantum mechanical models.
(Fig.2) AI research is hyped and unreliable.
The 5th, 7th, and last paragraphs of this hyped news (8/13/2025) say
"That's where PepMLM (= this research new AI method ) takes a different approach—instead of relying on structure, the tool uses only the protein's sequence to design peptide drugs."
"In lab tests, the team showed that PepMLM could design peptides—short chains of amino acids—that stick to disease-related proteins and, in some cases, help destroy them. These included proteins involved in cancer, reproductive disorders, Huntington's disease, and even live viral infections." ← hype
"Our ultimate goal (= still unrealized ) is a general-purpose, programmable peptide therapeutic platform—one that starts with a sequence and ends with a real-world drug,"
↑ The above hyped news is based on this or this research paper ↓
p.2-right-1st-paragraph says "We trained PepMLM using existing peptide–protein binding data sourced from the recent PepNN training set (= based on experimentally-obtained PDB protein structures, this-p.8-left-methods-Datasets ) and the gold standard Propedia dataset"
p.4-left-2nd-paragraph says
"defining a successful hit when a
designed binder achieved a higher ipTM score than its corresponding test binder, indicating AlphaFold's prediction of stronger binding
affinity. Our analysis revealed hit rates of 38% for PepMLM and 29%
for RFdiffusion
↑ This artificial definition of the hit (= success ) rate based on comparison with "test binders" (= Not based on the designed binders' own ipTM scores ) is doubtful.
↑ The actual hit (= successful prediction ) rate is much worse, i.e. lower (< 10% ), if we count only the PepMLM-designed binders with high ipTM scores (> 0.8, because ipTM scores below 0.8 mean unreliable, bad predictions, this-4th-paragraph, this-p.14-290 ).
↑ In this-Fig.1d, the ratio of PepMLM's red dots with ipTM score > 0.8 is much smaller (< 10% ) than 38%.
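To make the objection concrete: the paper counts a design as a "hit" whenever its ipTM merely exceeds that of the corresponding test binder, whereas a stricter count would require the designed binder itself to reach a reliable ipTM (> 0.8). Below is a minimal Python sketch contrasting the two definitions; the ipTM values are made up for illustration and are not data from the paper.

    # Minimal sketch with made-up ipTM values, NOT data from the paper:
    # contrast the paper's relative hit-rate definition with a stricter
    # absolute-threshold definition (the designed binder itself must reach ipTM > 0.8).

    designed_iptm = [0.45, 0.62, 0.83, 0.31, 0.74]   # hypothetical ipTM of designed binders
    test_iptm     = [0.40, 0.55, 0.90, 0.20, 0.60]   # hypothetical ipTM of matched test binders

    # Paper-style definition: a designed binder beats its corresponding test binder.
    relative_hit_rate = sum(d > t for d, t in zip(designed_iptm, test_iptm)) / len(designed_iptm)

    # Stricter definition: the designed binder itself reaches a reliable ipTM.
    absolute_hit_rate = sum(d > 0.8 for d in designed_iptm) / len(designed_iptm)

    print(f"relative hit rate (paper-style): {relative_hit_rate:.0%}")  # 80% for these toy values
    print(f"absolute hit rate (ipTM > 0.8):  {absolute_hit_rate:.0%}")  # 20% for these toy values

A high relative rate can coexist with a low absolute rate, which is the discrepancy the figure above suggests.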
↑ This Peer review file of the above research paper ↓
p.2-last-paragraph says "However, in my opinion the experiments that are presented in the paper do Not unequivocally proof the success of the computational methodology"
p.3-1st-paragraph says "the authors need to include experiments where direct binding is measured between the peptides and the targets" ← Whether the peptides designed by this AI (PepMLM) really bound to the desired sites of the target proteins is uncertain.
p.3-2nd-paragraph says "First of all, ELISA (= using antibodies vaguely bound to target proteins to estimate degraded proteins ) is Not a reliable test to measure binding, this is a method that frequently shows artifacts due to non-specific protein binding" ← unreliable AI research
p.3-Minor-3rd-paragraph says "Why is the hit rate defined as predictions where the designed sets exhibited higher ipTM score compared to the test set score... I am not sure that the definition of hit rate is meaningful"
So this AI research, which tries to predict peptides that bind to target proteins, is doubtful and still impractical.
Feel free to link to this site.