[Frontiers in Bioscience E3, 1061-1078, June 1, 2011]
An introduction to quantum chemical methods applied to drug design

Marco Stenta, Matteo Dal Peraro

Laboratory for Biomolecular Modeling, Institute of Bioengineering, School of Life Sciences, Swiss Federal Institute of Technology - EPF Lausanne, CH-1015 Lausanne, Switzerland
1. ABSTRACT

The advent of molecular medicine made it possible to identify the malfunctioning of subcellular processes as the source of many diseases. Since then, drugs are no longer merely discovered, but designed to fulfill a precise task. Modern computational techniques, based on molecular modeling, play a relevant role both in target identification and drug lead development. By flanking and integrating standard experimental techniques, modeling has proven itself a powerful tool across the drug design process. The success of computational methods depends on a balance between cost (computation time) and accuracy. Thus, the integration of innovative theories and more powerful hardware architectures allows molecular modeling to be used as a reliable tool for rationalizing the results of experiments and accelerating the development of new drug design strategies. We present an overview of the most common quantum chemistry computational approaches, providing for each one a general theoretical introduction to highlight limitations and strong points. We then discuss recent developments in software and hardware resources, which have allowed state-of-the-art computational quantum chemistry to be applied to drug development.

2. INTRODUCTION: FROM DRUG DISCOVERY TO DRUG DESIGN

The need to provide medicines can be traced back to the roots of humankind, when religious beliefs and common sense guided all attempts to cure disease. For many centuries during the age of botanicals (pre-1800s), medical care consisted of eating, drinking, or applying substances. Since the strategies for identifying new drugs were driven by empirical observation and intuition, each success was necessarily reliant on fortuitous accident. Pharmacopoeias regulating the composition and dosage of available drugs were introduced in Egypt as early as the 16th century BC (Ebers Papyrus, Edwin Smith Papyrus) and later (5th century BC) improved and refined in Greece (Corpus Hippocraticum); yet the first successful attempts to identify illnesses by their symptoms only date back to the 19th century. Since the cures were almost exclusively symptomatic, only a few authentic drugs were discovered in this period (quinine, cocaine, digitalis and, later, aspirin). The diffusion of the "microbial theory of diseases" at the turn of the last century was a key development in biology and medical science, paving the way for the advances of the 1920s and 1930s. The almost serendipitous discovery of penicillin by Fleming provided the basis for the rational use of substances as specific drugs to cure specific illnesses. The antibiotic era began in the early 1940s, heralded by advances in synthetic organic and analytical chemistry, technical improvement of instruments and tools, a new multidisciplinary approach to studying disease, and an increased demand during World War II for penicillin derivatives and other antibiotics. Although the drug discovery process was becoming less serendipitous, the main approach in the 1950s and 1960s was still based on trial and error. Whenever a new active substance was discovered in nature, and its active principle purified, characterized, and tested, entire libraries of similar compounds would then be synthesized, tested, and examined for higher activity levels and fewer side effects. In the 1970s, cancer and heart diseases became the greatest targets for medical research. Drug discovery strategies based on massive and systematic scans of compound libraries were thus abandoned in favor of newer and more efficient approaches.
To deal with such intrinsically multifactorial diseases, identifying specific targets appeared to be the only route to success (1). Recent advances in biochemistry, chemical biology, and biotechnology allowed scientists to look for drug targets at a subcellular level, which was desirable because many diseases were associated with the activity of nanoscale objects. The discoveries concerning DNA and protein structure dramatically changed the landscape for those interested in finding new therapies. As soon as molecular and atomistic details about DNA replication and enzyme activity became available, the quest for molecules capable of interfering with those processes began in earnest. Drugs would not be discovered, but designed to exert an action on a specific target selected from thousands of potential targets in the organism's metabolic pathways. The atomistic structure of DNA and its complexes with proteins are now studied to identify molecules capable of interfering with replication (anticancer drugs). The active sites of enzymes are also studied in detail so that their activity can be better prevented by inhibitor molecules (antibacterial, anti-inflammatory drugs) (2, 3). In this new era of drug development, computer sciences play many significant roles (4, 5). The most obvious is the classification, storage, and effective retrieval of the exponentially increasing amount of data coming from different research fields. Several drug development strategies are based on scanning databases of the properties of known compounds. Moreover, hardware improvements and the availability of new programming languages have allowed physical theories and methods to be employed in chemistry and biology. Computable models directly based on (or derived from) Quantum Mechanics (QM) are used to study the physical and chemical properties of molecules. These methods can provide insights at the molecular and atomistic level that are usually difficult to obtain by experimental techniques. The reaction mechanism of enzymes, for instance, can be elucidated using these methods, providing a rational guide in the design of specific inhibitors. The binding affinity of drug candidates for proteins and DNA can also be tested in silico, with considerable gains in both time and money. The increased reliability of QM-based methods has made them crucial drug development tools. The improved quality of the available molecular models and their widespread use in drug design are indeed contributing to a new era of molecular medicine.

3. MOLECULAR MODELLING

In recent decades, molecular modeling techniques have evolved with unexpected speed, becoming ever more reliable. As theoretical chemists developed new theories to describe reality, computational chemists implemented them in increasingly efficient software. Nowadays many experimental chemists, working in either organic or physical chemistry, can easily use modern modeling software for both research and teaching (6). The systems that can be treated by molecular modeling (6-8) range from small isolated molecules to large biological macromolecules (like proteins and DNA). Most molecular modeling studies involve three stages. In the first stage, the most appropriate level of theory must be chosen to describe the system in question. The balance between accuracy and speed must reflect the nature of the system as well as its size. The two most common computational approaches are based on quantum mechanics (QM) or molecular mechanics (MM).
The former is based on an approximate solution of the Schrödinger equation, the latter on a classical description of the atoms following Newton's laws of motion. Both computational approaches provide the total energy for a given atomic configuration and can provide information about the system's thermodynamic properties. The second stage of a molecular modeling study is the actual calculation, using the chosen computational procedure to calculate the best geometrical arrangement (the one with the lowest energy), the reaction pathway, the behavior of the system as a function of time, and/or the value of relevant observables that are useful for rationalizing experimental data or predicting the behavior of the system in question. The third stage is the analysis of the results, which leads to the construction of an interpretative and predictive model.

4. QUANTUM CHEMISTRY-BASED METHODS

The so-called ab initio or first-principles methods are usually considered the most accurate and consistent (6) because they provide the best physical approximation of the system. The term ab initio implies that these methods, based on the laws of quantum mechanics, require the knowledge of just a few fundamental constants: the electron mass, the electron and nuclear charges, and the values of fundamental physical constants, such as Planck's constant and the speed of light. The time-independent Schrödinger equation is used to find stationary states and is reported in Equation 1,

$$\hat{H}\,\Psi(\mathbf{r},\mathbf{R}) = E\,\Psi(\mathbf{r},\mathbf{R}) \qquad (1)$$

where $\hat{H}$ is the molecular Hamiltonian, $\Psi$ the wavefunction depending on the electronic ($\mathbf{r}$) and nuclear ($\mathbf{R}$) coordinates, and $E$ the total energy of the system. For a molecule, the Hamiltonian collects the kinetic energy operators of electrons and nuclei and the Coulomb interactions among them (Equation 2):

$$\hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2} - \sum_{A}\frac{\hbar^{2}}{2M_{A}}\nabla_{A}^{2} - \sum_{i,A}\frac{Z_{A}e^{2}}{r_{iA}} + \sum_{i<j}\frac{e^{2}}{r_{ij}} + \sum_{A<B}\frac{Z_{A}Z_{B}e^{2}}{R_{AB}} \qquad (2)$$

Since the ratio of the electron mass to the proton mass is about 1/1836, the electronic motion is much faster than the nuclear motion. Consequently, the electrons of a molecule can rapidly adjust to any change in the nuclear positions, and the energy of a molecule in its ground state can be considered a function of the nuclear coordinates only. This is the rationale behind the Born-Oppenheimer (BO) approximation (8, 9), which allows us to separate electronic and nuclear degrees of freedom. The total Hamiltonian can be split into its nuclear and electronic terms, as reported in Equations 3 and 4,

$$\hat{H} = \hat{T}_{N} + \hat{H}_{e} \qquad (3)$$

$$\hat{H}_{e}\,\psi_{e}(\mathbf{r};\mathbf{R}) = E_{e}(\mathbf{R})\,\psi_{e}(\mathbf{r};\mathbf{R}) \qquad (4)$$

where the indices $N$ and $e$ denote the nuclear and electronic terms, respectively, and the electronic wavefunction $\psi_{e}$ depends only parametrically on the nuclear coordinates. According to the Molecular Orbital (MO) approach, the molecular wavefunction can be written in terms of one-electron functions (spin-orbitals) $\chi_{i}$, antisymmetrized in a Slater determinant (Equation 5):

$$\Psi(1,2,\ldots,n) = \frac{1}{\sqrt{n!}}\,\begin{vmatrix} \chi_{1}(1) & \cdots & \chi_{n}(1) \\ \vdots & \ddots & \vdots \\ \chi_{1}(n) & \cdots & \chi_{n}(n) \end{vmatrix} \qquad (5)$$

Each molecular orbital $\phi_{i}$ is expanded as a linear combination of atomic basis functions $\varphi_{\mu}$ (Equation 6):

$$\phi_{i} = \sum_{\mu=1}^{K} c_{\mu i}\,\varphi_{\mu} \qquad (6)$$

For decades, the Hartree-Fock (HF) method (11) has been one of the most widely used computational approaches to determining the wavefunction within the molecular orbital model. The wavefunction is written in the form of a Slater determinant. The procedure is based on the use of the Hartree-Fock eigenvalue equations (Equation 7),

$$\hat{F}\,\phi_{i} = \varepsilon_{i}\,\phi_{i} \qquad (7)$$

where the Fock operator $\hat{F}$ comprises the one-electron core Hamiltonian $\hat{h}$ and a mean-field electron-electron repulsion expressed through the Coulomb ($\hat{J}$) and exchange ($\hat{K}$) operators (Equations 8 and 9):

$$\hat{F} = \hat{h} + \sum_{j}\left(2\hat{J}_{j} - \hat{K}_{j}\right) \qquad (8)$$

$$\hat{J}_{j}\,\phi_{i}(1) = \left[\int \phi_{j}^{*}(2)\,\frac{1}{r_{12}}\,\phi_{j}(2)\,d\tau_{2}\right]\phi_{i}(1), \qquad \hat{K}_{j}\,\phi_{i}(1) = \left[\int \phi_{j}^{*}(2)\,\frac{1}{r_{12}}\,\phi_{i}(2)\,d\tau_{2}\right]\phi_{j}(1) \qquad (9)$$

Introducing the basis set expansion of Equation 6 turns the HF equations into the matrix form known as the Roothaan-Hall equations (12) (Equation 10),

$$\mathbf{F}\,\mathbf{C} = \mathbf{S}\,\mathbf{C}\,\boldsymbol{\varepsilon} \qquad (10)$$

where $\mathbf{F}$ is the Fock matrix, $\mathbf{C}$ the matrix of the MO coefficients, $\mathbf{S}$ the overlap matrix of the basis functions, and $\boldsymbol{\varepsilon}$ the diagonal matrix of the orbital energies. The solution of the Hartree-Fock equations, even in the Roothaan-Hall matrix formulation, is not trivial, since the Fock operator is a function of the molecular orbitals, which themselves correspond to the solutions of the eigenvalue equations. The algorithm used to solve the Roothaan-Hall equations is an iterative approach known as the Self Consistent Field (SCF) procedure. According to this algorithm, an initial guess for the Fock matrix is built using molecular orbitals obtained from approximate procedures based on extended Hückel theory (6) or on semiempirical methods (6). Then, the Roothaan-Hall equations are solved to give the atomic coefficients for each orbital, the corresponding orbital energies and, consequently, the total energy.
The procedure is repeated using the updated coefficients to build a new Fock matrix, and a new value of the energy is obtained. The new total energy value is compared to the previous one, and the procedure stops when the energy difference is smaller than a chosen threshold. In summary, the widespread HF-SCF approach relies on several approximations: the Born-Oppenheimer separation of electronic and nuclear motions; the expansion of the molecular orbitals in a finite basis set; the description of the many-electron wavefunction by a single Slater determinant; and the mean-field treatment of the electron-electron repulsion, which neglects instantaneous electron correlation.
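As a concrete illustration of the SCF procedure described above, the short script below runs a restricted HF calculation with the open-source PySCF package. The molecule, basis set, and convergence threshold are arbitrary choices made for the example; they are not taken from the original article.

```python
# Minimal restricted Hartree-Fock SCF run with PySCF (illustrative only).
from pyscf import gto, scf

# Build a small test system (water) with a modest Gaussian basis set.
mol = gto.M(
    atom="""O  0.000  0.000  0.117
            H  0.000  0.757 -0.469
            H  0.000 -0.757 -0.469""",
    basis="6-31g",
)

mf = scf.RHF(mol)
mf.conv_tol = 1e-9   # iterations stop when the energy change drops below this threshold
e_hf = mf.kernel()   # runs the iterative SCF cycle described in the text
print(f"Converged HF-SCF energy: {e_hf:.8f} Hartree")
```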
Nowadays more advanced approaches are preferred to HF because of these intrinsic limitations. In fact, the missing electron correlation effects in the HF-SCF method lead to large deviations with respect to experimental results. A number of approaches, usually denoted as post-Hartree-Fock methods, have been devised to include electron correlation in the multi-electron wavefunction, thus avoiding many of the HF pitfalls. One of these approaches, Møller-Plesset perturbation theory (11, 13), treats correlation as a perturbation of the Fock operator (see below). Other methods expand the multi-electron wavefunction as a linear combination of Slater determinants, such as Multi-Configurational Self Consistent Field (MC-SCF) (7, 8), Configuration Interaction (CI) (11), and Complete Active Space SCF (CAS-SCF) (7, 8). Both MC-SCF and CAS-SCF are considered the reference methods for the study of processes involving multiple electronic states. Photoinduced chemical reactions are an important application field, as exemplified by studies on vision processes initiated by rhodopsin light absorption (14) and by investigations into DNA light absorption processes (15). Another important field of application is the study of reaction mechanisms involving low-lying excited states, a common example being the studies on cytochromes: ubiquitous and versatile catalysts involved in respiration, waste product degradation, and drug excretion (16, 17). The insights provided by QM methods (18) into cytochromes (e.g. reaction mechanism, substrate specificity) have been important for understanding the pharmacokinetics of many drugs.

4.2. Møller-Plesset perturbation theory

Møller-Plesset perturbation theory (MP) is one of the most popular post-Hartree-Fock ab initio methods in quantum chemistry (the main idea was published in 1934 (13)). Rayleigh-Schrödinger perturbation theory (RS-PT) is used to improve the Hartree-Fock energy. The acronyms MP2, MP3, and MP4 denote a truncation of the perturbation series at the second, third, and fourth order, respectively. Although it is more computationally expensive than HF, the MP2 method is often used to treat systems where electron correlation effects play a key role. For instance, weak interactions like dispersion (London) forces or π-stacking interactions between aromatic compounds can be properly described at the MP2 level. In several enzymes, substrate recognition and transition state stabilization are mediated by aromatic residues in the active pocket (19); thus, methods capable of accounting for such interactions can be useful in the search for inhibitors capable of displacing the natural substrate by establishing tighter interactions with the active site pocket.
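To make the workflow concrete, the sketch below adds the MP2 correlation correction on top of a converged HF reference, again using PySCF purely for illustration. The dispersion-bound neon dimer, its separation, and the basis set are arbitrary example choices, not systems studied in the article.

```python
# MP2 on top of a converged HF reference with PySCF (illustrative sketch).
from pyscf import gto, scf, mp

# A weakly (dispersion-) bound dimer: HF alone misses the attraction.
mol = gto.M(atom="Ne 0 0 0; Ne 0 0 3.1", basis="aug-cc-pvdz")

mf = scf.RHF(mol).run()   # mean-field reference
pt = mp.MP2(mf).run()     # second-order perturbative correlation correction

print("E(HF)  =", mf.e_tot)
print("E(MP2) =", pt.e_tot)   # equals mf.e_tot + pt.e_corr
```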
4.3. Density Functional Theory (DFT)

DFT sits somewhere between the ab initio and semiempirical approaches. According to its formalism, the basic quantity is not a many-body wavefunction, but the molecular electron density. The most common implementation of density functional theory is the Kohn-Sham (20) approach, where the intractable many-body problem of interacting electrons in a static nuclear electrostatic potential is reduced to a tractable problem of non-interacting electrons moving in an effective potential. This potential is composed of the nuclear potential and the effects of the interactions between the electrons, divided into exchange and correlation contributions. In Equation 11, the first term represents the electrostatic interaction of the electron density with the external potential (which usually corresponds to the Coulomb interaction between electrons and nuclei), the second represents the repulsion between the electron density and itself (Coulomb energy), the third term approximates the electronic kinetic energy (by expanding the density in a set of orbitals and computing the kinetic energy for the hypothetical system of non-interacting electrons), and the final term is the exchange-correlation functional, which corrects the other terms:

$$E[\rho] = \int v_{ext}(\mathbf{r})\,\rho(\mathbf{r})\,d\mathbf{r} + \frac{1}{2}\iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d\mathbf{r}\,d\mathbf{r}' + T_{s}[\rho] + E_{xc}[\rho] \qquad (11)$$

The major problem within KS-DFT is the modeling of the exchange and correlation interactions. The oldest and simplest approximation, called the Local Density Approximation (LDA), was based on the Thomas-Fermi model of a uniform electron gas. In local exchange-correlation functionals, the correlation and exchange contributions depend only on the scalar value of the electron density at a given point in space (Equation 12):

$$E_{xc}^{LDA}[\rho] = \int \rho(\mathbf{r})\,\varepsilon_{xc}\big(\rho(\mathbf{r})\big)\,d\mathbf{r} \qquad (12)$$

A simple exchange functional (Slater) (21) is reported in Equation 13: it accounts for the stabilization of high electron density, correcting the electron-electron energy, which is overestimated by the Coulomb term:

$$E_{x}^{S}[\rho] = -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3}\int \rho(\mathbf{r})^{4/3}\,d\mathbf{r} \qquad (13)$$

Combining this exchange functional with the correlation functional proposed by Vosko, Wilk, and Nusair leads to the SVWN functional (which is synonymous with LDA). Second generation exchange-correlation functionals include not only functions of the scalar density, but also functions of the gradient of the density. They are also referred to as "non-local", as the gradient introduces a certain degree of non-locality into the energy expression. They should be more properly referred to as "gradient-corrected" (adopting the Generalized Gradient Approximation, or GGA) because the exchange-correlation term is still an integral relating a defined energy contribution to the electron density of an infinitesimal volume. Since Becke's proposal (22) of a gradient-corrected exchange functional, many attempts have been made to improve the reliability of GGA DFT by developing correlation functionals with parameters obtained by fitting experimental data or constructed to reproduce well-known physical principles. A popular GGA functional combines the Becke exchange expression with the Lee, Yang, and Parr correlation formulation (23), giving the commonly used BLYP functional. Other functional forms for both the exchange and correlation expressions have been proposed, and all their possible combinations constitute the large family of GGA DFT functionals. The next generation of GGA functionals, called meta-GGA, such as that of Tao, Perdew, Staroverov, and Scuseria (TPSS) (24), includes two more functions of the density: the Laplacian of the total density (or of the densities of spin-up and spin-down electrons) and the sum of the kinetic energy densities of the Kohn-Sham orbitals. The observation that atomization energies (i.e. the energy required to break a molecule completely into its component atoms) are underestimated by the HF method and overestimated by both LDA and GGA methods suggested a combined treatment to improve the results. Many sophisticated hybrid functionals have been developed.
An early formulation (25) adopted a three-parameter functional where the exchange-correlation energy was expressed as a combination of the local exchange-correlation energy, the HF exchange energy, and the gradient corrections to the exchange and correlation energies, as shown in Equation 14, where the Becke exchange (22) and the Perdew et al. correlation (26) are used:

$$E_{xc}^{B3} = E_{xc}^{LDA} + a_{0}\left(E_{x}^{HF} - E_{x}^{LDA}\right) + a_{x}\,\Delta E_{x}^{B88} + a_{c}\,\Delta E_{c}^{PW91} \qquad (14)$$

Substituting the Perdew correlation with that of LYP (23) leads to the well-known B3LYP functional. For studying large biomolecular systems, hybrid functionals like B3LYP are usually preferred to more accurate approaches like MPn or coupled cluster methods (6) because of a considerably lower computational cost, comparable to that of HF. Nevertheless, hybrid functionals have had little application in ab initio molecular dynamics: LDA and GGA functionals are used instead because of their lower computational cost. The DFT method has become very popular in the last two decades, including among experimental chemists. New functionals are constantly being developed to improve accuracy and performance for particular classes of systems (gas clusters, metal surfaces, metal complexes, radical-containing systems, unbound complexes, etc.). Due to the limited transferability of some of the new functionals, the choice of the correct functional for a given system becomes a major issue. However, a large number of publications have reviewed the main application domains of the various functionals (27-30), thus providing a valuable guide for the correct usage of the DFT method.
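In practice, running a hybrid-functional calculation differs from the HF example above only in the choice of driver and functional. The snippet below is an illustrative PySCF sketch; the molecule, basis set, and functional are arbitrary example choices.

```python
# Kohn-Sham DFT single point with a hybrid functional in PySCF (illustration).
from pyscf import gto, dft

mol = gto.M(
    atom="""O  0.000  0.000  0.117
            H  0.000  0.757 -0.469
            H  0.000 -0.757 -0.469""",
    basis="def2-svp",
)

ks = dft.RKS(mol)
ks.xc = "b3lyp"      # hybrid exchange-correlation functional (LYP variant of Equation 14)
e_dft = ks.kernel()  # the same SCF machinery, now applied to the Kohn-Sham equations
print(f"B3LYP energy: {e_dft:.8f} Hartree")
```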
4.4. Dispersion force corrections in exchange-correlation functionals

DFT functionals generally provide a poor description of dispersion forces, which play a fundamental role in many aspects of chemistry and biochemistry. For instance, π-stacking interactions are often central to the correct folding of proteins, to the packing of DNA base pairs, to substrate recognition and binding in enzymes, etc. Current approximations of the gradient-corrected XC functional prevent an accurate description of π-stacking, since long-range electron correlation is not implicitly included in many functionals, which are thus unable to correctly reproduce the interactions between unbound chemical species. Fortunately, several recent advances allow researchers to partially overcome such problems. The so-called DFT-D schemes (31, 32) correct the DFT energy with a damped dispersion term that reproduces the correct asymptotic form $-C_{6}R^{-6}$. Empirical corrections to DFT are a significant improvement over standard DFT functionals at essentially no added computational cost. The functional form of the empirical dispersion term can be easily parameterized for use with any existing DFT functional, substantially increasing the domain of application of GGA functionals. Alternatively, there is promise in the new multi-atom-centered expansion schemes introduced by Roethlisberger and co-workers for correcting the approximate XC functionals in DFT, in which the total electronic density of a system is described as a sum of atomic densities corrected for the interatomic interactions. This scheme (atom-centered potentials, ACPs) has proven particularly efficient when applied to the correction of dispersion forces in DFT (33-36). Dispersion-corrected atom-centered potentials (DCACPs) can reproduce the asymptotic $R^{-6}$ behavior when a sufficiently complete basis set is used. A set of DCACPs is currently available for all the biologically relevant elements (37). Other approaches involve reshaping the form of the exchange and correlation functionals, introducing new parameters taken from experiments or from higher-level computations, varying the amount of included HF exchange, etc. A new family of functionals (MPW1B95, MPWB1K, M05, M05-2X, M06-2X, M08-HX and M08-SO) developed by Truhlar and co-workers (38-41) is proving particularly effective for studying biologically relevant systems (42, 43). A different approach underlies the family of functionals developed by Grimme (44). These comprise a mixture of DFT components and the MP2 correlation energy calculated with the DFT orbitals. Grimme referred to his functional as B2PLYP (45-48) (B88 exchange, 2 fitted parameters, and a perturbative mixture of MP2 and LYP); a version with improved performance (especially for weak interactions) is mPW2PLYP (49). From the extensive calibration work, the new functionals appear to give better energetics and a narrower error distribution than B3LYP. Thus, the additional cost of the calculation of the MP2 energy may be well invested.
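The pairwise form of the DFT-D correction discussed above is simple enough to write down directly. The sketch below implements a D2-style damped $-C_{6}R^{-6}$ sum in Python; the parameters (s6, d, per-atom C6 coefficients, and van der Waals radii) are placeholder values chosen for illustration, not a validated parameter set.

```python
import numpy as np

# Sketch of a DFT-D2-style pairwise dispersion correction (after Grimme):
#   E_disp = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij)
# added on top of the plain DFT energy.

def damped_dispersion(coords, c6, r_vdw, s6=1.0, d=20.0):
    """coords: (N, 3) positions in Angstrom; c6: per-atom C6 coefficients;
    r_vdw: per-atom van der Waals radii. Returns the dispersion energy."""
    n = len(coords)
    e_disp = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6_ij = np.sqrt(c6[i] * c6[j])      # geometric-mean combination rule
            r_r = r_vdw[i] + r_vdw[j]           # sum of van der Waals radii
            # Damping switches the correction off at short range, where the
            # functional already describes the interaction reasonably well.
            f_damp = 1.0 / (1.0 + np.exp(-d * (r / r_r - 1.0)))
            e_disp -= s6 * c6_ij / r**6 * f_damp
    return e_disp
```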
4.5. Semi-empirical methods

Semi-empirical (SE) methods such as AM1 (50), PM3 (51), PM6 (52), and RM1 (53) are based on the HF formalism but contain several approximations and include parameters taken from empirical data or from higher-level calculations. Because of their extremely low computational cost, SE methods have been used to study large molecules and are still used to investigate the dynamics of organic and bioorganic systems. Their general performance is affected by the approximations adopted to decrease the computational cost; still, for some selected systems they provide reliable results thanks to their particular parameterization. Nevertheless, the use of SE methods, even for large systems, is steadily decreasing as progress in computational resources and code optimization allows the use of more reliable and transferable methods like DFT.

4.6. Density Functional Tight-Binding

DFTB is an alternative to the quantum chemical semi-empirical methods (6). It corresponds to an approximate DFT scheme and is characterized by a computational speed similar to that of traditional semi-empirical quantum chemical methods (like AM1 and PM3), while requiring a smaller number of empirical parameters. The approximate DFTB method is derived from DFT by a second-order expansion of the DFT Kohn-Sham total energy with respect to charge density fluctuations. The zeroth-order approach is equivalent to a standard non-self-consistent (tight-binding, TB) scheme, while at second order a transparent, parameter-free, and readily calculable expression for the generalized Hamiltonian matrix elements can be derived. DFTB can thus be seen as a tight-binding method parameterized from DFT. DFTB was augmented by a self-consistency treatment based on atomic charges in the so-called self-consistent charge density functional tight-binding (SCC-DFTB) method (54-57). The SCC-DFTB method has been applied to a large variety of problems in chemistry (58-60), physics (61, 62), materials science (54, 63), and biochemistry (64-70). During the past decade, it has been continually developed to improve its accuracy and transferability.

5. MOLECULAR MECHANICS APPROACH

Computational approaches based on Molecular Mechanics (MM) allow researchers to compute the energy and properties of large molecular systems. The total energy of the system is simply derived from the sum of different energetic contributions (additive force field). The MM Hamiltonian (6-8) thus comprises several terms, each one accounting for the contribution arising from the system's bonding (stretching, bending, and torsional) and non-bonding (van der Waals, Coulombic) interactions (see Equation 15). Both the stretching and bending contributions (first and second terms in Equation 15) are expressed by simple harmonic potentials. The torsional contribution is described by a cosine series (third term) to account for multiple conformational minima. The classical 6-12 Lennard-Jones potential (6) and the Coulombic function are used to describe, respectively, the van der Waals and electrostatic non-bonding interactions (fourth and fifth terms of Equation 15):

$$E_{MM} = \sum_{bonds} k_{b}(r - r_{0})^{2} + \sum_{angles} k_{\theta}(\theta - \theta_{0})^{2} + \sum_{dihedrals}\sum_{n} \frac{V_{n}}{2}\left[1 + \cos(n\phi - \gamma)\right] + \sum_{i<j} 4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \sum_{i<j} \frac{q_{i}q_{j}}{4\pi\epsilon_{0}r_{ij}} \qquad (15)$$

In the MM force field, the equilibrium bond lengths and angles ($r_{0}$, $\theta_{0}$), the force constants, the Lennard-Jones parameters, and the atomic partial charges are parameters derived from experimental data or from higher-level calculations. To obtain reliable values from MM calculations, a new atom definition has been adopted: all atoms in a molecule are classified as different atom types not only on the basis of the atomic number (as in QM approaches), but also according to their chemical environment (bonded atoms). This has led to the development of different parameters for aliphatic and aromatic carbon atoms, for carbonyl or alcoholic oxygen atoms, and so on. A particular MM force field (73) is defined by the adopted functional form of the energy expression and the specific values of the parameters chosen for the available atom types. Force fields like OPLS (74), AMBER (75-77), GROMOS (78), and CHARMM (79) are widely used to study standard biomolecular systems (proteins, DNA, and RNA), while corrections and extensions are continually developed and improved to treat carbohydrates, lipids, enzyme cofactors, and other molecules of relevant biological interest, including drugs and ions. Particularly relevant in the field of drug design are the General AMBER Force Field (GAFF) (80) and the CHARMM General Force Field (CGenFF) (81), because they provide suitable parameters for drug-like organic molecules, which can thus be readily parameterized for virtual screening studies. MM potentials are in fact widely used in molecular docking protocols (82-85) devised to score the conformation of ligands within the active site of an enzyme. The small computational cost of these potentials allows for the fast screening of large libraries of chemical compounds. Moreover, MM potentials, when coupled to advanced simulation techniques (such as free energy calculations), can be efficiently employed in the lead optimization step, which is aimed at optimizing the potency of drug candidates for specific targets (86-88).
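To make the additive form of Equation 15 concrete, the toy evaluation below computes one term of each type in Python. All parameter values and geometries are made-up illustrative numbers, not taken from any real force field.

```python
import numpy as np

# Toy evaluation of the additive force-field energy of Equation 15 for a
# single bond, angle, torsion, and non-bonded pair (illustrative numbers).

def bond_energy(r, k_b, r0):
    return k_b * (r - r0) ** 2                 # harmonic stretching term

def angle_energy(theta, k_theta, theta0):
    return k_theta * (theta - theta0) ** 2     # harmonic bending term

def torsion_energy(phi, v_n, n, gamma):
    return 0.5 * v_n * (1.0 + np.cos(n * phi - gamma))  # one cosine-series term

def nonbonded_energy(r, epsilon, sigma, q1, q2, coulomb_const=332.06):
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)  # 6-12 Lennard-Jones
    coul = coulomb_const * q1 * q2 / r         # Coulomb term (kcal/mol, e, Angstrom units)
    return lj + coul

# Total MM energy = sum of all bonded and non-bonded contributions.
e_total = (bond_energy(1.10, 340.0, 1.09)
           + angle_energy(np.deg2rad(108.0), 50.0, np.deg2rad(109.5))
           + torsion_energy(np.deg2rad(60.0), 1.4, 3, 0.0)
           + nonbonded_energy(3.5, 0.10, 3.4, -0.2, 0.2))
print(f"Toy additive MM energy: {e_total:.3f} kcal/mol")
```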
6. HYBRID QUANTUM MECHANICS/MOLECULAR MECHANICS METHODS

Along the challenging pathway leading to the development of new drugs, the study of the structure and function of the targeted proteins is of paramount importance. While MM approaches are useful for investigating the structural features of very large systems, they cannot be used to study reactive processes (in which the bond topology changes), such as those taking place at the enzymatic active site. In contrast, QM approaches, capable of dealing with both the thermodynamics and kinetics of chemical reactions, cannot be applied to large systems, like those of biological interest, because of their excessive computational demand. For many years, a partial solution involved building model systems comprising only the reactive region and the nearest surrounding atoms. Using this approach, the reaction mechanisms of several enzymatic reactions were elucidated, but the results had only a qualitative value, since most of the environmental effects of the enzyme were overlooked and excluded from the model. Nowadays, the most commonly used approach to overcoming such limitations is the combination of DFT (or other QM methods) with classical force-field-based molecular mechanics (MM) in the so-called quantum mechanics/molecular mechanics (QM/MM) schemes. The system is partitioned into two or more regions treated at different levels of theory (Figure 1). The chemically relevant part of the system is described by QM, while the remaining portion is treated with faster MM methods. The choice of the QM region is not trivial, as solvent molecules, ions, and parts of the protein/DNA may be directly involved in the process of interest. In addition, a proper treatment of the long-range electrostatic interactions between the QM and MM regions is crucial for obtaining a reliable approach. Since the first QM/MM scheme was proposed in 1976 by Warshel and Levitt (89), many different QM/MM implementations have flourished, and such hybrid approaches have been successfully employed in both the biological and material science fields (89-97). The gathered experience has allowed researchers to define the most important issues that a QM/MM scheme should address to provide reliable results, with the treatment of the QM-MM boundary region being considered the most controversial (98-101). A more technical issue connected with the efficiency of QM/MM schemes is the actual implementation of the hybrid potential. Several QM/MM implementations consist of MM extensions to existing QM codes, such as the ONIOM (102) implementation included in the GAUSSIAN (103) suite of programs, or the hybrid approaches included in CPMD (104) and CP2K (105). MM codes also feature interfaces to (or embed) QM algorithms (often semiempirical), as is the case with AMBER (68, 106, 107) and CHARMM (108). A somewhat less efficient, but more flexible, approach is provided by the "pipe" or "interface" codes, capable of coupling existing and well-tested QM and MM software by means of light procedures that prepare the inputs for the chosen QM and MM programs, launch them, and then combine their outputs. This approach is used in the COBRAMM (109-111), CHEMSHELL (112), COMQUM (113), and PUPIL (114) software, whose scope ranges from material science to drug discovery. QM/MM approaches have been successfully applied to the various phases of the drug design process (115-117), providing a better understanding of enzyme reaction mechanisms (118-122) and catalytic proficiency (109, 110). More recently, hybrid approaches have been integrated into computational protocols devised for docking (85, 123, 124) and for computing the binding affinity of drugs (85, 125, 126), thus providing a useful tool for the in silico screening of lead candidates (127-130).
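One widely used way to combine the two descriptions is the subtractive (ONIOM-style) scheme mentioned above. The sketch below expresses its energy bookkeeping in Python; `qm_energy` and `mm_energy` are hypothetical placeholders standing in for calls into any QM and MM engine, not a specific program's API.

```python
# Subtractive (ONIOM-style) QM/MM energy combination (conceptual sketch).
# `qm_energy` and `mm_energy` are hypothetical callables wrapping a QM and
# an MM engine, respectively.

def qmmm_energy(full_system, qm_region, qm_energy, mm_energy):
    """E(QM/MM) = E_MM(full system) + E_QM(QM region) - E_MM(QM region).

    The cheap MM description of the chemically relevant region is replaced
    by the QM one, while the environment and its coupling to that region
    remain at the MM level."""
    return (mm_energy(full_system)
            + qm_energy(qm_region)
            - mm_energy(qm_region))
```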
7. METALLOENZYMES: A CHALLENGE FOR QM METHODS

Metal ions interact with cellular components, and their role is of paramount importance, e.g., in assuring structural stability, inducing conformational rearrangements, and conferring proper functionality in biological catalysis (metalloenzymes). Metalloenzymes are widespread proteins, ubiquitous in all kingdoms of life and involved in various biosynthetic processes. α-Carbonic anhydrases (CAs), matrix metalloproteinases (MMPs), and the angiotensin-converting enzyme (ACE), among others, are clinically exploited targets in the treatment or prevention of a variety of diseases such as congestive heart failure, hypertension, glaucoma, epilepsy, and cancer (131, 132). Other members of the family, like the metallo-β-lactamases (MBLs), are promising targets for developing new antibacterial drugs capable of circumventing the emergence of bacterial resistance (133). Heme-group-containing enzymes are an important subgroup of the metalloenzyme family; cytochromes (16, 17), among others, have been investigated in detail (18, 134-136) to elucidate the degradation pathways of both endogenous metabolites and drug molecules, with the aim of rationally improving the pharmacokinetics of lead compounds. QM and MM methods, as well as the more recent QM/MM hybrid techniques, have been used to study several metalloenzymes to better understand their catalytic features (125, 137-141) or to help in the lead optimization phase of structure-based drug design (85, 116, 117, 119, 126, 142-144). The investigation of metalloenzymes is a major challenge for computational methods, because many metals can easily undergo spin state changes, allowing for different ligand arrangements. The polarizability of metal complexes is barely addressed by even the most sophisticated MM force fields. Several QM methods can be used to investigate the peculiar behavior of metals, but at an increased computational cost. For this reason, many theoretical and computational chemists continue to focus on developing new techniques capable of capturing the elusive behavior of metalloenzymes. For these techniques to be used in the field of drug development, particular care must be devoted to the general usability of the methods and to the cost/accuracy balance.

8. THE POTENTIAL ENERGY SURFACE

The energy of a molecular system, computed at the QM, MM, or QM/MM level for a particular atomic arrangement, is often referred to as the "single point energy". Since the absolute energy has little physical meaning, it is common practice to compare the energies of two or more atomic configurations, since energy differences are related to physical observables. The Potential Energy Surface (PES) describes the dependence of the energy of a system on its atomic configuration; it collects the potential energy of all the possible atomic configurations of a given system. In the Cartesian coordinate system, 3N coordinates are required to describe a system comprising N atoms, since each atom is described by 3 coordinates that identify its position with respect to an arbitrary point (the origin of the coordinate system). An internal coordinate system can be obtained by choosing 3N-6 linearly independent coordinates (the rotational and translational degrees of freedom are eliminated) that coincide with bond lengths, bond angles, and dihedral (torsion) angles between atoms (6, 8). The PES can thus be defined in terms of Cartesian or internal coordinates. Only a few points on the PES landscape have a chemical and physical significance. In particular, the points where the first derivatives of the energy with respect to all the coordinates vanish are of great interest. Those characterized by all positive second derivatives can be either one of the many local minima or the unique global minimum of the energy, corresponding, respectively, to meta-stable nuclear configurations or to the most stable configuration.
Points with vanishing first derivatives and negative second derivatives with respect to k<n coordinates are denoted as saddle points of index k. They have a chemical significance only if k=1, in which case they correspond to "transition structures". An elementary reaction step is described as a transition from one equilibrium state (minimum) to a neighboring one via a single transition state. The reaction mechanism is given by the sequence of steps involved in a chemical process and corresponds, on the PES, to the Minimum Energy Path (MEP) connecting the two minima that represent reactants and products, respectively (Figure 2). The localization of critical points on the PES is the main goal of many computational procedures aimed at studying the stability or reactivity of a system, because minima and first-order saddle points represent, respectively, stable nuclear arrangements and transition structures. Since the complexity of the PES rapidly increases with the number of coordinates, the search algorithm is crucial for locating critical points. Efficient algorithms for locating equilibrium and transition structures are now available in modern molecular software. These algorithms are based on the calculation of the first derivatives (gradient) and second derivatives (Hessian matrix) and allow researchers to perform a simultaneous optimization of the whole set of coordinates. In general, the search algorithm is iterative, and the geometry is gradually modified until the desired critical point is reached. The fundamental equations for computing the coordinate variations at each step of the search procedure can be derived by assuming a quadratic shape of the PES (6-8). If the surface were truly quadratic, the desired critical point would be obtained in a single step. In most cases, however, the surface is far from quadratic, so a sequence of moves on the surface (optimization steps) is usually required to locate the critical point. Because the calculation of the Hessian matrix is computationally expensive, approximate forms of the Newton-Raphson equations (involving approximate Hessian matrices) are usually employed. These methods are often referred to as "quasi-Newton" methods. The "steepest descent" method is far from accurate, but it is very fast and can be efficiently used to decide the first moves in a non-quadratic region of the PES, far away from the critical point. Then, in the vicinity of the critical point, the search algorithm can be switched to the Newton-Raphson method or to a more accurate quasi-Newton scheme. Examples of quasi-Newton methods are the widely used BFGS (145-148) scheme (Broyden-Fletcher-Goldfarb-Shanno), the L-BFGS (149, 150) scheme (limited-memory BFGS), and the PSB (151) scheme (Powell-Symmetric-Broyden). Extensive exploration of a system's configurational space is thus avoided by using an appropriate optimization algorithm, capable of locating critical points by computing the energy and gradient of a small number of atomic configurations.
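As a small illustration of a quasi-Newton search, the snippet below minimizes a toy two-dimensional double-well surface with the L-BFGS optimizer available in SciPy. The surface and the starting point are invented for the example; production quantum chemistry codes embed analogous optimizers internally.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-dimensional PES with two minima, at (+1, 0) and (-1, 0).
def toy_pes(x):
    return (x[0] ** 2 - 1.0) ** 2 + 2.0 * x[1] ** 2

guess = np.array([0.4, 0.3])  # arbitrary starting "geometry"

# L-BFGS builds an approximate inverse Hessian from successive gradients,
# avoiding the expensive explicit Hessian calculation discussed above.
result = minimize(toy_pes, guess, method="L-BFGS-B")
print("Located minimum at:", result.x)  # converges to approximately (1, 0)
```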
9. THE FREE ENERGY SURFACE

The potential energy does not take into account the temperature of the system; thus, it should not be related directly to physical observables like kinetic and equilibrium constants. The kinetics and thermodynamics of physical and chemical processes are instead directly related to free energy differences. In particular, the Gibbs free energy is defined when both pressure and temperature are constant, as in most physical systems under standard experimental conditions. The Gibbs free energy G is defined as the sum of an enthalpic term H, accounting for the potential contributions, and an entropic term -TS (i.e. G = H - TS). Computing the Free Energy Surface (FES) of a given system is an extremely difficult task, and several strategies based on either MD or Monte Carlo techniques have been devised to compute free energy differences between states or free energy changes associated with processes. In all cases, the time required to explore and characterize the relevant portion of the FES is orders of magnitude greater than that required for PES exploration, because a large number of energy and gradient evaluations are needed for statistical reasons. MD-based methods coupled to different levels of theory (e.g. HF, DFT, etc.) and different schemes (e.g. QM/MM) are valuable solutions for exploring the FES of a given system. Nevertheless, the time required to spontaneously explore the relevant portions of the FES is still out of reach for large systems of biological and pharmaceutical relevance, given current software and hardware performance. Enhanced sampling techniques are thus required to speed up the calculations. Some of these procedures require the definition of one or more reaction coordinates so as to restrict the exploration of the phase space to a smaller, more interesting region chosen as representative of the process in question. Steered (SMD) and targeted (TMD) molecular dynamics (152, 153) and umbrella sampling (US) (154-156) have been successfully used in the past to study both protein conformational changes and enzymatic reactions (6). More recent techniques like conformational flooding (CF) (157) and metadynamics (MMD) (158-160) do not require prior knowledge of the reaction coordinate components, allowing an unbiased investigation of the FES. In particular, the use of metadynamics schemes within the framework of QM/MM simulations has recently proven to be an effective protocol for investigating the reaction mechanisms of potential drug target enzymes (161-163). Within this framework, hybrid QM/MM (104, 105, 164) MD simulations based on the Car-Parrinello (CP) (165, 166) treatment of the electronic degrees of freedom have emerged as an efficient method for characterizing the reaction mechanisms of several enzymes of pharmaceutical relevance (137, 138, 167-169). The CP scheme, as implemented in the CPMD code (165, 170), was originally developed for the study of condensed matter systems. In the past decade, it has frequently been used to treat isolated systems of biological relevance (171). In the DFT framework, the CP method allows researchers to describe the dynamics of the ground state, in which the electrons adiabatically follow the nuclear degrees of freedom. The advantage of this method over Born-Oppenheimer (BO) MD is that the wavefunction does not need to be optimized at every MD step, providing a considerable computational gain (for further details on the CP and BO methods, see the review by Marx and Hutter (172)). Recently, efficient optimization schemes (173) have been developed that quickly converge the wavefunction of the system. This allows the use of BO-MD as an alternative way of studying enzymatic catalysis in a QM/MM setup (e.g. see the implementation of BO-MD in CP2K (166, 174) and the recent studies published using this scheme (161)).
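The core bookkeeping behind metadynamics is compact enough to sketch: a history-dependent bias is accumulated as a sum of Gaussian hills deposited along a collective variable, progressively filling the free energy wells and pushing the system over barriers. The Python sketch below illustrates only this idea; the hill height, width, deposition stride, and mock trajectory are invented illustrative values.

```python
import numpy as np

# Metadynamics bias as a sum of Gaussian hills along a collective variable s.

def bias_potential(s, hill_centers, w=0.1, sigma=0.05):
    """Bias energy at CV value s, given the centers of previously deposited hills."""
    centers = np.asarray(hill_centers, dtype=float)
    return float(np.sum(w * np.exp(-((s - centers) ** 2) / (2.0 * sigma ** 2))))

# During the simulation, a new hill is added every few steps at the current CV
# value; the negative of the accumulated bias eventually estimates the free
# energy profile along s.
hills = []
for step, s_current in enumerate([0.10, 0.12, 0.11, 0.15]):  # mock CV trajectory
    if step % 2 == 0:            # deposition stride (illustrative)
        hills.append(s_current)
    e_bias = bias_potential(s_current, hills)
```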
10. FUTURE PERSPECTIVES

In recent decades, computational approaches have played an increasingly significant role in the field of drug development thanks to their improved reliability. In particular, QM- and QM/MM-based protocols are today routinely used to investigate the reaction mechanisms and catalytic proficiency of potential drug targets. CPU-demanding simulations are thus used to obtain structural and electronic information about reaction intermediates and transition states so that researchers can design analogue molecules that act as inhibitors. The ongoing effort of many research groups in academia and industry continues to improve the reliability and the cost/efficiency ratio of QM-based approaches. QM/MM potentials have been designed to investigate biological systems without introducing unphysical truncations near the active site; thus, QM/MM hybrid schemes offer noticeable improvements over a simple QM treatment of small portions of the system, and they have been used in recent years to study a large variety of biological systems. Researchers have thus been forced to test and improve the accuracy of QM methods, and often to develop new theoretical approaches, to cope with the diversity of chemical reactions and substrates encountered in biology. The need for a better description of metalloenzymes has required accurate tests of existing DFT functionals (28) and has led to the development of novel strategies for studying transition metal complexes (175). Additionally, the increasing importance attributed to weak interactions in both complex stability and reactive processes is driving research into new QM methods capable of correctly describing noncovalent interactions at a reasonable computational cost (43). Nonetheless, the high demand in terms of simulation time and hardware requirements remains a limiting factor for QM methods, especially in the field of drug design. Thus, many researchers have focused on developing new theories and better approximations to improve performance without impairing the accuracy of the results. The resolution-of-the-identity (RI) approximation (176), for example, is a recent breakthrough that allows DFT and even MP2 to be efficiently used to study biological systems. While theoreticians develop more accurate methods and computer scientists implement them in efficient algorithms, developments in hardware are poised to herald a new era for QM-based methods. Since many available QM applications are highly demanding of CPU (central processing unit) time, several strategies have been devised to efficiently share the load of a single calculation over a large number of CPUs: algorithms have been written to run in parallel on many processing units, and interconnect systems have been devised to allow efficient communication between the different nodes of a cluster. More recently, graphics processing units (GPUs), formerly used only for graphical applications in professional design and video gaming, have started to be used for general computing applications, benefiting from their evident advantages in terms of parallelization and hardware cost. Thus, QM algorithms have recently been rewritten in new programming languages designed for new-generation GPUs. Two-electron integral evaluation (177) and the SCF procedure (178), both at the heart of many ab initio QM methods, have been shown to be faster on GPUs than on state-of-the-art CPUs by one to two orders of magnitude, opening the way for GPU-specific implementations of HF (179), DFT (180), and even RI-MP2 (181). These promising results are expected to improve steadily with the continuous progress in GPU-based hardware and software.
Soon, they will significantly affect the way we use QM-based methods to investigate biological systems and to rationally design more effective drugs.

11. REFERENCES

1. Markman, M. and Peereboom, D. M.: From serendipity to design: the evolution of drug development in oncology. Cleve Clin J Med, 64(3), 155-63 (1997)

51. Stewart, J. J. P.: Optimization of parameters for semiempirical methods I. Method. J Comp Chem, 10, 209-220 (1989)

101. Pu, J., Gao, J. and Truhlar, D. G.: Generalized Hybrid Orbital (GHO) method for combining ab initio Hartree-Fock wave functions with molecular mechanics. J Phys Chem A, 108(4), 632-650 (2004)

151. Powell, M. J. D.: A new algorithm for unconstrained optimization. Academic Press, New York (1970)

Key Words: Quantum Chemistry, Molecular Mechanics, QM/MM, Force Field, Drug Design, Enzyme Catalysis, DFT, CPMD, MP2, Review

Send correspondence to: Matteo Dal Peraro, EPFL SV IBI1 UPDALPE, AAB 017, Station 15, CH-1015, Lausanne, Switzerland, Tel: 41216931861, Fax: 41216939665, E-mail: matteo.dalperaro@epfl.ch