Predicting Bankruptcy in the UK with Data Mining: Finance Essay

In recent years there has been increasing interest in applying Data Mining techniques in the financial and economic field. The majority of these methods have been used to develop predictive models. The current research focuses on two Data Mining methods, Decision Trees and Tree Augmented Naïve Bayes. A sample of failed and non-failed manufacturing firms in the UK from the period 2004-2007 is selected in order to predict business failure using the companies' financial indicators. The initial set of variables has been reduced through a non-parametric technique, the Mann-Whitney-Wilcoxon test. The main purpose of the paper is to assess the ability of these two recent Data Mining techniques to make predictions, to identify the main financial ratios they single out, and to detect a possible corporate failure. The two methods show similar accuracy, but the decision tree is preferred because of its transparency and simplicity.

Introduction

After the bankruptcies of WorldCom and Enron, investors have become sensitive and cautious about risk, especially the risk of corporate failure (Aziz & Dar, 2006). A potential business failure can cause financial damage to investors, creditors, the state and the labour force, which means that it also has a severe impact on society. A business failure is, in essence, a firm that ceases its operations because of its inability to make an adequate profit (Ahn, Cho, & Kim, 2000). The reasons behind this situation may be inadequate management, a poor marketing strategy or an inability to keep up with developments and competition. It can also be caused by the domino effect that an economic or financial crisis can provoke. After the financial crisis of 2007 and the bankruptcy of large organisations such as Lehman Brothers, many national economies were affected and led into recession. Business failure can destabilise a whole market and impose a severe social and economic cost on a country. It is therefore essential that potential business failures are recognised early enough for these situations to be avoided.

Since the first publication of a model that predicts potential business failure, Beaver (1966), there have been numerous attempts to predict corporate bankruptcy. There are only a few studies on the significance of non-financial information, such as a sudden change of the CEO, the financial director or the auditor in the year before the bankruptcy (Shuai and Li, 2005). Furthermore, in their research Dimitras et al. (1996) stressed the great value of non-financial factors for business failure prediction; these factors include management, personnel, products and equipment. On the other hand, a great number of studies have been conducted that take into consideration the financial indicators which come from a firm's financial statements. Financial ratios can be classified into several categories in order to estimate and measure business performance. According to Huang et al. (2008), financial ratios are important tools in the prediction of business failures, so it is common practice to use them to develop forecasting methods. In this research, the prediction of potential bankruptcy is carried out with the help of Data Mining techniques using financial ratios.

Data Mining

One of the main characteristics of modern times is the huge amount of data and the great demand for data storage. Electronic devices record large amounts of data daily and store them on storage media that are becoming larger in capacity and cheaper in cost. For this reason specific methods have to be developed in order to convert raw data into useful information. The availability of large quantities of data, and the need to recover useful information from them, are the main reasons for the emergence of the field called Data Mining (DM).

"Data mining is the set of methods and techniques for exploring and analysing data sets, in an automatic or semi-automatic way, in order to find among these data certain unknown or hidden rules, associations or tendencies; special systems output the essentials of the useful information while reducing the quantity of data. Briefly, data mining is the art of extracting information - that is, knowledge - from data." (Tuffery, 2011). Data Mining includes concepts, techniques and methods from statistics, artificial intelligence, and even biology and neuroscience.

There are two different forms of Data Mining, verification and discovery. In verification, once the data are collected, the existence of patterns and trends is tested in order to confirm whether the main question of the research can be answered. In discovery, data mining can be descriptive or predictive. Descriptive techniques aim to reveal information that lies somewhere within a large amount of data. Predictive techniques intend to extract new information based on the existing information. In the case of prediction there are two main forms, regression and classification. Regression is a method which finds relationships between input and output patterns, where the values are continuous or real-valued. Classification, on the other hand, is a method in which a specific model or classifier is created in order to predict categorical labels. The major data mining tools for classification, as Rokach et al. (2007) mention, are Neural Networks, Bayesian Networks, Decision Trees, Support Vector Machines and instance-based methods.

1. Rokach, L. and Maimon, 2007, pp. 4

Finance and accounting are fields in which there has been an increasing application of Data Mining techniques in recent years. Banks, brokerage houses, large accounting firms and other financial-sector entities maintain systematic data files. Some applications of data mining in finance are the prediction of stock prices, the evaluation of a company's creditworthiness, the prediction of a corporate failure and the detection of possible fraud in financial statements. More specifically, the application of data mining methods in the auditing and accounting fields is widespread and is not found only at the academic level among researchers; professional organisations have also noted their significance. The American Institute of Certified Public Accountants (AICPA, 1999) lists Data Mining as one of the top 10 technologies of the future. In this work, Data Mining techniques are applied to the prediction of corporate failure in the UK.

In chapter 2, a literature review describes the historical and scientific course of business failure prediction. Chapter 3 gives a short description of how the two models and the validation method actually work and of the theoretical foundations on which they are based. Chapter 4 describes the whole procedure of the empirical research, illustrating every step of the process up to the point where the Data Mining tools are applied and the final results are produced. In chapter 5 the final results are discussed and further extensions of the study are suggested.

Literature Review

Data Mining methods have been applied in recent years to the prediction of business failure. In the survey of Kumar et al. (2007) the corporate failure prediction literature is classified into two main categories, statistical and intelligent methods. The statistical methods include techniques such as linear discriminant analysis, multivariate discriminant analysis, quadratic discriminant analysis, logistic regression and factor analysis. The intelligent techniques, on the other hand, include neural network architectures (multi-layer perceptron, probabilistic neural networks, learning vector quantisation and cascade correlation neural networks), case-based reasoning, evolutionary approaches, rough sets, decision trees, soft computing (hybrid intelligent systems), operational research techniques (linear programming, data envelopment analysis, quadratic programming), fuzzy logic techniques, support vector machines and Bayesian networks.

An extended literature review will follow and will reveal that there are many different studies, dealing with different industries and examining different periods of time. The reason behind this number of papers over the years is that, according to Dimitras et al. (1996), a single prediction model is not capable of predicting corporate failures across different types of companies. Additionally, they argued that the usefulness of a prediction model is limited in different countries, sectors and periods of time, so there is a clear need to carry out, for each period, an extensive study modelling different types of firms. They also note that prediction studies normally comprise three parts: sampling and data collection; method selection and specification of variables to develop a predictive model; and model validation. This research is also based on these three steps.

The first model that was developed, based on univariate analysis, was Beaver's (1966). In this research Beaver applied a t-test in order to evaluate each individual financial indicator. Two groups of companies were selected, the healthy and the bankrupt. The classification into groups was made by comparing the value of a specific ratio with a reference value (cut-off score) for the same ratio. The optimal cut-off point was identified where the percentage of misclassifications (failing or non-failing) was minimised. Beaver concluded that the 'cash flow to total debt' ratio was the most significant in predicting failure. In fact, the problem of predicting corporate failure is multidimensional, and no single ratio can explain and predict the diverse factors that can lead to a bankruptcy. Moreover, if only one ratio could make the prediction, that ratio could be manipulated and the accuracy and reliability of the model would be affected.
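To make Beaver's univariate procedure concrete, the following sketch (with hypothetical ratio values, and assuming lower values of the ratio signal distress) scans the candidate cut-off points of a single ratio and keeps the one that minimises misclassifications:

```python
import numpy as np

# Hypothetical 'cash flow to total debt' values: 1 = failed firm, 0 = healthy firm.
ratio  = np.array([-0.15, 0.02, 0.05, 0.08, 0.20, 0.25, 0.30, 0.45])
failed = np.array([1, 1, 1, 0, 0, 1, 0, 0])

best_cut, best_errors = None, len(ratio) + 1
for cut in np.unique(ratio):
    predicted_failed = ratio <= cut          # below the cut-off -> predicted failure
    errors = int(np.sum(predicted_failed != failed))
    if errors < best_errors:
        best_cut, best_errors = cut, errors

print(f"optimal cut-off {best_cut:.2f} with {best_errors} misclassification(s)")
```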

In 1968 Altman noted that bankruptcy is a long-term process and that it is feasible to get early notice of a forthcoming bankruptcy from each company's financial statements. He applied Multiple Discriminant Analysis (MDA) to the financial ratios of the manufacturing industry in the USA and proposed the Z-score. With this method Altman tried to find the linear function of specific financial indicators that discriminates between bankrupt and non-bankrupt companies. A selection of 33 bankrupt and 33 non-bankrupt manufacturing firms from the years 1946 to 1965 was made; the paired-sample method he used took industry and asset size as criteria. The 22 candidate financial ratios were selected from 5 categories: liquidity, profitability, leverage, solvency and activity. A Z-score was derived with 5 financial ratios. If a company had a Z-score greater than 2.99, it was classified as non-bankrupt; if it was below 1.81 it was classified as bankrupt. The area between 1.81 and 2.99 was called the "grey area", where misclassifications were observed (Altman, 1968). The MDA method that Altman suggested classified the companies with 95% success one year prior to bankruptcy. Further studies based on multivariate analysis expanded this method, constructing new models such as those of Springate (1978) and Bathory (1987). Some of the disadvantages of MDA are that the independent variables are assumed to follow a multivariate normal distribution and that the two groups of companies - bankrupt and non-bankrupt - are assumed to have different means but equal variance-covariance matrices. The method also treated companies from different industrial fields as the same. Kamath et al. (2006) highlight that when data are selected from different sectors, there is a chance that the heterogeneity of the observations is ignored and that bias therefore affects the estimation of the model parameters. Altman himself constructed a different Z-model using firms from the railway sector (Altman, 1973). Other researchers also found drawbacks in this particular method. Grice et al. (2001) stated that even if MDA is still a powerful tool for prediction, it faces difficulties when non-industrial companies are used and when the estimation period differs from the forecasting period. Pompe and Bilderbeek (2005) applied MDA to a sample of small companies and had difficulties in predicting corporate failure, because they could not find any significant financial ratio.
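The five ratios and weights come from Altman's published 1968 model rather than from this essay; a minimal sketch of the resulting classification rule, with the grey area described above, could look like this:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Original Altman (1968) Z-score for public manufacturing firms."""
    return (1.2 * wc_ta      # working capital / total assets
          + 1.4 * re_ta      # retained earnings / total assets
          + 3.3 * ebit_ta    # EBIT / total assets
          + 0.6 * mve_tl     # market value of equity / total liabilities
          + 1.0 * sales_ta)  # sales / total assets

def classify(z):
    if z > 2.99:
        return "non-bankrupt"
    if z < 1.81:
        return "bankrupt"
    return "grey area"

print(classify(altman_z(0.10, 0.15, 0.08, 0.90, 1.5)))  # illustrative values only
```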

In 1980, multiple logistic regression was used for the first time for the prediction of corporate bankruptcy, by James Ohlson. In his research he developed a model known as the "Ohlson logit model" and also provided a score called the "O-score". The research settled on 9 variables. He also developed three models able to predict bankruptcy one, two and three years prior to corporate failure respectively. The Ohlson model calculates the probability of bankruptcy, in contrast to the Altman model, which computes a score that classifies which companies will go bankrupt and which will not.

In 1984, Zmijewski built a weighted probit model, which has more general application than Altman's method. The Altman model was developed for manufacturing industries and is not suitable for other sectors, in contrast to Zmijewski's models, which are more generalised.

All the above statistical methods were widely used in several studies. But at the beginning of the 1990s these techniques began to be steadily replaced by non-parametric methods such as Data Mining.

Odom and Sharda (1990) studied and built a neural network model for potential business failure and compared its results and performance with those of discriminant analysis. They concluded that neural networks might be usable in forecasting corporate failure.

In 1997, Friedman et al. introduced an improvement of the Naïve Bayes technique, the Tree Augmented Naïve (TAN) Bayes, which outperforms Naïve Bayes while at the same time retaining the computational simplicity and robustness that are the main characteristics of Naïve Bayes. In that research a comparison between the C4.5 algorithm, Naïve Bayes and TAN Bayes was carried out, and they concluded that TAN is competitive and can demonstrate important improvements in many cases.

Joos et al. (1998) compared a decision tree with a logit analysis model in a credit classification setting, using an extensive database from one of the largest Belgian banks. The research concluded that the logit models were more consistent in a credit decision process but that, conversely, for qualitative and more limited data the decision tree had better classification accuracy.

Dimitras et al. (1998) applied a Data Mining method, Rough Sets, to the prediction of corporate failure. That paper also tried to incorporate human judgement into the forecasting: the variable selection was made by an executive manager of a Greek bank. The method produced 54 reducts and the manager selected one, from which the decision rules were derived. A validation method was applied and the accuracy rates were then compared with those of Discriminant Analysis (DA) and logit analysis. The comparison showed that the Rough Set model achieved a higher precision rate than the other methods, but it showed a large difference between the type I and type II errors.

Lin and McClean (2001) set out to determine which of discriminant analysis, decision trees, neural networks and logistic regression is the best prediction method. At the start they selected 37 initial variables, to which they applied three different variable-selection methods - human judgement, ANalysis Of VAriance (ANOVA) and factor analysis - in order to examine which selection method gives the best and most accurate results. They found that ANOVA performs better than the others for all of the classifiers except discriminant analysis. Finally, they concluded that decision trees and neural networks had the best performance, 88% each.

Sarkar and Sriram (2001) developed Bayesian Network (BN) models to help human auditors in evaluating bank failures. The classification accuracy of their Naïve Bayesian network and composite-attribute BN was comparable to that of the DT algorithm C4.5. They underlined that the predictive power of BNs increases when recent financial indicators are used in the models.

Kotsiantis et al. (2005) studied 50 failed and 100 "healthy" Greek companies from the period 2003-2004. They took representative algorithms from several different Data Mining families. More specifically, they selected the Naïve Bayes (NB) algorithm from Bayesian networks, the Local Decision Stump (DS) algorithm from instance-based methods, the RIPPER algorithm from rule-learning techniques, C4.5 from decision trees, the RBF algorithm from neural networks and the Sequential Minimal Optimization (SMO) algorithm from support vector machines. They ran 10-fold cross-validation for each algorithm and found that, 3 years prior to bankruptcy, the NB, RBF and Local DS algorithms had the best performance, with 68% accuracy.

Sun L. et al. (2007) developed a Naïve Bayes model which was found to perform with 81% accuracy based on a ten-fold validation analysis. They also examined whether modelling continuous variables with continuous distributions, instead of discretising them, can improve the Naïve Bayes model's performance; they found that it does not. One justification they gave was that the continuous distributions tested in that study do not represent well the underlying distributions of the empirical data.

Wei-Wen Wu (2010) used 163 Taiwanese companies from a wide variety of economic sectors, excluding the financial and banking sector. In this research 18 classification algorithms were compared, among them C4.5, Naïve Bayes and Tree Augmented Naïve (TAN) Bayes. The main interest of the research is the observation that existing failure prediction models tend to follow the technique of matching up bankrupt and non-bankrupt companies, and that the pair-match method leads to further complications. The paper proposes a method which directly explores the characteristics of bankrupt companies rather than examining pairs of failed and non-failed firms.

Suzaida et al. (2012) applied a Naïve Bayesian model using a proposed heuristic method in order to predict bankruptcy in Malaysian companies. The performance of the model reached 91.70%. In this research no validation method was used, and it is not mentioned how many years before the bankruptcy the model tries to predict.

Among the different papers that deal with the prediction of business failure, some apply different validation methods and some do not apply any. Kirkos (2007) studied relevant papers and found that 14 use test-set validation, in which the whole sample is split into two parts, one used for training and the other as a validation set. Six papers use cross-validation, which will also be applied in this work, and 3 of them do not apply any.

Overall, we could say that the prediction of corporate failure is a vivid research area, and the application of new Data Mining techniques and of models that use methods from computer science has renewed the interest in this kind of research. The survey of Huang et al. (2008) showed that the majority of earlier studies dealing with the prediction of corporate bankruptcy used statistical methods such as multiple discriminant analysis, regression analysis and linear discriminant analysis. Only recently - the study does not clarify the exact time meant by "recently" - has the application of artificial intelligence techniques begun. According also to that paper, artificial intelligence and machine learning models outperform traditional statistical models.

In this research two Data Mining techniques will be tested: decision trees and a particular extension of Naïve Bayes, the Tree Augmented Naïve (TAN) Bayes.

Methodology

Decision Trees

A decision tree consists of nodes, each corresponding to some feature of the training set, divided into the following:

1. Root: the initial node at the top of the tree, which divides the training set into 2 or more subgroups.

2. Internal node: an intermediate node that divides each subset of the subtree into smaller subsets.

3. Leaf: a terminal node that represents a class from the discrete set of all training classes.

All nodes except the leaves have outgoing "edges" that form the basis of the separation of the data. The basis on which this division takes place is called the split criterion.

An example of how a decision tree can be visualised is shown in the following figure.

Figure 3.1 Decision Tree

In this specific research the decision tree starts from the selected sample with its various variables (the financial indicators) and a class label stating whether or not each firm went bankrupt. At each internal node, the tree splits the group of samples based on an attribute value.

The C4.5 algorithm (Quinlan, 1993) is one of the most widespread; it is in fact an extension of Quinlan's older ID3 algorithm (Quinlan, 1986). It is based on the divide-and-conquer method (Hunt et al., 1966) for growing decision trees.

When the C4.5 algorithm reaches an internal node, it takes the optimal split, continuing until no further splits are possible. The algorithm uses a quantitative criterion for the split, the Information Gain, which is related to a concept from physics: entropy.

Consider a training data set T that contains a certain amount of pre-classified data. At each node of the decision tree, the algorithm chooses a feature, which is called an attribute. According to this attribute, the training set T is split into subsets T1, T2, ..., Tn, one for each value of the chosen attribute. The criterion for this split is called the Gain Ratio. The Gain Ratio criterion is similar to the Gain criterion used by the ID3 algorithm (Quinlan, 1986), the predecessor of C4.5; the only difference is that C4.5 has an extra step at the end. The algorithm tries to divide the training set into n subsets according to one attribute of the training set, and the optimal attribute is selected by the Gain criterion. At the start the algorithm scans T and calculates, for every possible value qz (z = 1, 2, ..., n) of every attribute Q, the number of positive, negative and total appearances; that is, it counts the objects that take the value qz of attribute Q. Furthermore, it calculates the entropy of the training set T with the following formula:

Info(T) = -\sum_{h=1}^{f} \frac{freq(C_h, T)}{|T|} \log_2\!\left(\frac{freq(C_h, T)}{|T|}\right)

where |T| is the number of objects that belong to the training set T, freq(C_h, T) is the number of objects that belong to class C_h, and f is the number of classes. In the next step the expected information after the split is computed as the weighted sum over the subsets:

Info_Q(T) = \sum_{z=1}^{n} \frac{|T_z|}{|T|}\, Info(T_z)

Finally, if the training set T is split by the attribute Q, the Information Gain is Gain(Q) = Info(T) - Info_Q(T). This procedure is applied repeatedly for every candidate split of T, and the Gain criterion selects the attribute with the largest Information Gain. But the Gain criterion has a really strong bias in favour of tests with many outcomes (Quinlan, 1993); that is the reason a further step is added in the C4.5 algorithm. The Gain Ratio criterion performs a kind of normalisation of the Information Gain. So there are two calculations. First,

SplitInfo(Q) = -\sum_{z=1}^{n} \frac{|T_z|}{|T|} \log_2\!\left(\frac{|T_z|}{|T|}\right)

and after that, there is

GainRatio(Q) = \frac{Gain(Q)}{SplitInfo(Q)},

where GainRatio(Q) is the normalised version of Gain(Q).
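As a small illustration of these formulas (our own sketch, not Weka's implementation), the following computes the entropy, split information and Gain Ratio for a discrete attribute:

```python
import math
from collections import Counter

def info(labels):
    """Entropy Info(T) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute_values, labels):
    """Gain Ratio for splitting the labels by one discrete attribute."""
    n = len(labels)
    subsets = {}
    for v, y in zip(attribute_values, labels):
        subsets.setdefault(v, []).append(y)
    info_q = sum(len(s) / n * info(s) for s in subsets.values())   # Info_Q(T)
    gain = info(labels) - info_q                                   # Gain(Q)
    split_info = -sum(len(s) / n * math.log2(len(s) / n) for s in subsets.values())
    return gain / split_info if split_info > 0 else 0.0

# Toy example: status (1 = bankrupt, 0 = active) split by a discretised ratio.
ratio_bucket = ["low", "low", "high", "high", "low", "high"]
status       = [1, 1, 0, 0, 1, 0]
print(round(gain_ratio(ratio_bucket, status), 3))   # 1.0: a perfect split
```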

If a split is near-trivial, the split information will be small and the Gain Ratio will not be stable. For that reason, the algorithm selects the attribute with the largest normalised Information Gain among the attributes that have at least the average Information Gain. This strategy is repeated and is called partitioning; as a result a decision tree is grown. This technique of partitioning divides the set of training cases until no further test provides any improvement. Often a very complicated and sophisticated tree is grown, which "overfits the data" by creating more structure than is justified by the set of training cases. For that reason, the C4.5 algorithm includes a special method called pruning. There are two different forms of pruning, prepruning and postpruning (Han et al., 2006). In prepruning the researcher decides, during the tree-growing process, when the tree should stop building subtrees; this offers the advantage of not producing subtrees which would be thrown away after the process is complete. The second method, postpruning, cuts subtrees from a fully grown tree.

The C4.5 algorithm uses a specific pruning technique which calculates an error estimate based on the training data themselves (Witten, 2005). For N samples used for training, the decision tree classifies E samples incorrectly, so the number of errors is E. Let q be the true probability of error at the node, and suppose the N samples are generated by a Bernoulli process with parameter q, of which E are errors. It is then essential to estimate q, the true error rate, in order to reason about the degree of accuracy that the new pruned tree will have.

From the two figures E and N, q is estimated. But because E and N do not come from an independent test set, they may give an optimistic estimate of the true error rate. For that reason C4.5 uses a more pessimistic estimate. For a confidence level c (c = 25% in this particular algorithm), a confidence limit z is calculated such that

\Pr\!\left[\frac{f - q}{\sqrt{q(1-q)/N}} > z\right] = c

where f = E/N is the observed error rate and q is the true error rate. After that, the upper confidence bound is used in order to estimate the error rate e:

e = \frac{f + \frac{z^2}{2N} + z\sqrt{\frac{f}{N} - \frac{f^2}{N} + \frac{z^2}{4N^2}}}{1 + \frac{z^2}{N}}

So now the (pruned) tree with the lowest error rate e will be selected.
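A sketch of this estimate, using z ≈ 0.69 for the default confidence level c = 25% (the value Witten et al. report for this setting):

```python
import math

def pessimistic_error(E, N, z=0.69):
    """Upper-bound error estimate used by C4.5 pruning (c = 25% -> z ~ 0.69)."""
    f = E / N                                # observed error rate at the node
    num = f + z**2 / (2 * N) + z * math.sqrt(f / N - f**2 / N + z**2 / (4 * N**2))
    return num / (1 + z**2 / N)

# A node that misclassifies 2 of its 6 training cases:
print(round(pessimistic_error(2, 6), 3))    # noticeably above the raw f = 0.333
```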

Decision trees (DT) have several advantages. In contrast to other statistical methods, the DT does not make arbitrary assumptions about the linearity of the relations between the input and output variables or about the independence of the input variables. They are also non-parametric models, which means that they can handle non-normal distributions. The representation of the knowledge is also comprehensible, and it is easy to draw conclusions about how the final results were obtained. Specifically, in corporate failure prediction the decision tree shows which variables - financial ratios - it uses in order to make the prediction. Other artificial intelligence methods do not provide this feature; neural networks, for example, act more like a "black box" (Han et al., 2006): the way a neural network is connected and the different weighted links it uses make it difficult for a human to interpret. Furthermore, decision trees can manage missing values, which is - according to Witten et al. (2005) - an endemic problem in real-world datasets and especially in the case of financial ratios.

On the other hand, DTs also have some disadvantages. First, they are very sensitive to changes in the training samples, which can lead to a different tree. Furthermore, several decision tree algorithms, such as ID3 or C4.5, require the whole training sample to be loaded into the main memory of the computer, which causes problems in handling extremely large samples. Also, many DT tools require prior probabilities of non-bankrupt and bankrupt companies as inputs; these prior probabilities are usually estimated arbitrarily, which adds an element of imprecision to the DT.

Naïve Bayes has also been selected; it is mentioned as one of the top 10 algorithms in data mining (Xindong Wu, 2007). The Naïve Bayes model has simplicity and robustness as its advantages. It is very easy to build, and there is no need for complicated iterative parameter estimation schemes, which helps it to be applied to very large data sets. Naïve Bayes can manage quantitative data, discrete data and missing values. On the contrary, its strongest disadvantage is that it assumes independence of the variables; particularly in the case of financial ratios, there is dependence among the variables.

Bayesian Networks

Bayesian networks are powerful tools for knowledge representation and for reaching conclusions under uncertainty. Initially this method was not considered a classification tool, but later, when Naïve Bayes - a simplified version of Bayesian networks - was introduced, it was shown that Bayesian networks have very reliable and efficient classification potential, comparable with that of neural networks and decision trees.

Bayesian networks base their theoretical background on statistics and probability. They derive from Bayes' theorem, which calculates the posterior conditional probability P(H | X), namely the probability that hypothesis H holds given that the fact X is true. According to Bayes' theorem the probability P(H | X) is given by the equation:

P(H \mid X) = \frac{P(X \mid H)\, P(H)}{P(X)}

where P(H) is the prior probability of the hypothesis, P(X) is the prior probability of the fact X, and P(X | H) is the conditional probability of observing X given that hypothesis H is true.

Naïve Bayes is an application of Bayes' theorem under certain assumptions. Naïve Bayes treats X as an observation of the sample and H as the hypothesis that the observation belongs to a class. Specifically, X is considered a vector of n values, X = (x_1, x_2, ..., x_n). It is also supposed that there are m classes (C_1, C_2, ..., C_m). So, according to Bayes' theorem, the probability that the observation X belongs to class C_i is calculated by the following equation:

P(C_i \mid X) = \frac{P(X \mid C_i)\, P(C_i)}{P(X)}

In order to predict the class of an unknown observation, the Bayes classifier calculates the probability for each class and assigns the observation to the class with the highest probability. P(X) is the same for all classes, and P(C_i) can be computed as the number of observations that belong to class C_i divided by the total number of observations. The difficult part is the computation of P(X | C_i), which can become complicated if it is assumed that there is correlation among the input variables. But in the case where the variables are independent, the computation of P(X | C_i) is easier and the corresponding equation is:

P(X \mid C_i) = \prod_{k=1}^{n} P(x_k \mid C_i),

where x_k is the value of dimension k of the vector X.

The classifier, having calculated the probabilities P(C_i | X) for all classes, assigns the observation to the class with the highest probability. When the assumption that the variables are independent actually holds, the Naïve Bayes classifier achieves higher accuracy rates (Langley et al., 1992).
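A compact sketch of this computation for discrete attributes (our own illustration, with hypothetical discretised ratios and no smoothing of the counts):

```python
from collections import Counter, defaultdict

def train(X_rows, y):
    """Estimate P(C) and the counts needed for P(x_k | C) from discrete data."""
    class_counts = Counter(y)
    priors = {c: n / len(y) for c, n in class_counts.items()}
    cond = defaultdict(Counter)               # (class, attribute k) -> value counts
    for row, c in zip(X_rows, y):
        for k, v in enumerate(row):
            cond[(c, k)][v] += 1
    return priors, cond, class_counts

def predict(row, priors, cond, class_counts):
    scores = {}
    for c, prior in priors.items():
        p = prior
        for k, v in enumerate(row):
            p *= cond[(c, k)][v] / class_counts[c]    # P(x_k | C)
        scores[c] = p
    return max(scores, key=scores.get)

# Hypothetical discretised ratios (profitability, liquidity); 1 = bankrupt.
X = [("low", "low"), ("low", "high"), ("high", "high"), ("high", "low")]
y = [1, 1, 0, 0]
model = train(X, y)
print(predict(("low", "high"), *model))       # -> 1
```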

So, the Naïve Bayes network may be depicted as follows:

Figure 3.2 Naïve Bayes

where x1, x2 and x3 are the different independent variables.

When real data are used, there is often the problem of correlation among the variables. So there have been efforts to improve the Naïve Bayes method by introducing new tools, such as the Tree Augmented Naïve (TAN) Bayes, which allows dependence among the variables (Friedman et al., 1997). The TAN Bayes method takes the initial Naïve Bayes structure and adds edges to it. In a Naïve Bayes network the class attribute is the single parent of each node; TAN considers adding a second parent to each node (Witten et al., 2005). That means, for example, that if there is an arc from x_i to x_j, the two attributes are not independent given the class; instead, the influence of x_j on the class probabilities depends on the value of x_i. The following figure is an example of what TAN Bayes looks like:

Figure 3.3 Tree Augmented Naïve Bayes

So the variables x1, x2, x3, x4 and x5 all descend from the class node c, and there are additional dependences between the variables. The same methodology will be used for the prediction of corporate failure: the class node points to the selected variables, and the model also takes into consideration the correlations that the variables have with each other.
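Friedman et al. (1997) choose the extra arcs by building a maximum-weight spanning tree over the attributes, weighted by the conditional mutual information I(x_i; x_j | C). A rough sketch of that structure-learning step (our own illustration for discrete data, not Weka's code) is:

```python
import numpy as np

def cond_mutual_info(xi, xj, y):
    """I(xi; xj | C) for discrete arrays, estimated from empirical counts."""
    n, mi = len(y), 0.0
    for c in set(y.tolist()):
        xi_c, xj_c = xi[y == c], xj[y == c]
        pc = len(xi_c) / n
        for a in set(xi_c.tolist()):
            for b in set(xj_c.tolist()):
                p_ab = np.mean((xi_c == a) & (xj_c == b))
                p_a, p_b = np.mean(xi_c == a), np.mean(xj_c == b)
                if p_ab > 0:
                    mi += pc * p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def tan_edges(X, y):
    """Maximum-weight spanning tree over the attributes (Prim's algorithm)."""
    d = X.shape[1]
    w = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            if i != j:
                w[i, j] = cond_mutual_info(X[:, i], X[:, j], y)
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        best = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
                   key=lambda e: w[e])
        edges.append(best)            # each attribute gains one extra parent
        in_tree.add(best[1])
    return edges

# Tiny hypothetical example: x1 mirrors x0, x2 is independent given the class.
X = np.array([[0, 0, 0], [0, 0, 1], [1, 1, 0], [0, 0, 1], [1, 1, 0], [1, 1, 1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(tan_edges(X, y))   # e.g. [(0, 1), (0, 2)]: x1 attaches to x0, as expected
```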

Validation Method

According also to Dimitras et al. (1996), in a data mining study the sampling, the data collection, the method selection and the specification of the variables used to develop a predictive model are followed by a validation method. A validation procedure is essential in order to qualify the model that is used. With validation, the generalisation ability of our hypothesis, namely the quality of its inductive bias, is checked. This is achieved by dividing the training set into two different parts: one part is used for the training procedure, in which the decision tree is trained, and the other as a test set, on which performance and accuracy are assessed (Alpaydin, 2004). This whole procedure is appropriate when the sample chosen for the data mining method contains a large amount of data. In this specific research the sample includes 114 companies - 57 failed and 57 non-failed - so it is not possible to cut the sample further, because after that the training set could not provide reliable results. There are several techniques that can deal with this kind of problem, such as the bootstrap method, the jackknife procedure and k-fold cross-validation. The Weka workbench provides the k-fold cross-validation method for both Data Mining techniques, DT and TAN Bayes. Kohavi (1995) also studied validation techniques and showed that 10-fold cross-validation is the best model-selection choice for decision tree and Naïve Bayes methods. In this method the sample is divided into 10 equal subsets; one subset is used for validation while the remaining 9 are combined to create the training sample. This process is repeated 10 times, each time using a different subset for validation.
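Weka offers this as a built-in option; an equivalent sketch in Python is shown below (scikit-learn's DecisionTreeClassifier implements CART rather than C4.5, and the data here are random placeholders, so this only mimics the experimental setup):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(114, 11))          # placeholder for the 11 selected ratios
y = np.repeat([0, 1], 57)               # 57 active (0) and 57 failed (1) firms

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```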

Empirical Design

Data Description

For the data collection procedure, the FAME (Forecasting Analysis and Modeling Environment) database has been used. The FAME database categorises firms as "active" and "inactive" companies. Inactive companies are themselves classified as "dissolved" and "in liquidation". The two definitions according to the FAME database are:

"Dissolved": the firm no longer exists as a legal entity.

"In liquidation": once a company goes into liquidation there is no way back. It has ceased all its activities and ends up being dissolved. So the firms that belong to "in liquidation" are treated as the bankrupt companies that had trouble meeting their financial obligations.

First, the period examined, 2004-2007, precedes the financial crisis. The reason behind this choice is that in this period the UK economy and business confidence were at relatively high levels. Furthermore, after several years of relevant research, it is interesting to observe whether it is still feasible to make predictions about corporate failure. Some cases, such as the company Enron, which went bankrupt in 2001 because of fraud in its financial statements, indicate that financial statements are sometimes manipulated, which makes prediction harder. Manipulation of data and fraudulent financial statements are a severe problem that Kirkos et al. (2007) also try to predict with the help of Data Mining.

The assumed year of bankruptcy is denoted by the time t0. The variables that have been selected are from year -3, i.e. three years before t0. The international literature follows different time patterns: the original Altman model (1968) reports that for years -1 and -2 before t0 corporate failure can be accurately predicted, and the majority of the international literature examines years -3 to -5.

The manufacturing industry has been selected for our research because of its significance in the UK economy and the range of firm sizes in this specific area. The manufacturing companies have been selected through the UK Standard Industrial Classification of Economic Activities 2007; they come from section C, divisions 10 to 33.

According to relevant research (Gentry et al., 2002), cash flow ratios have been shown to be very important for the prediction of corporate failure. In order to include cash flow ratios in our research and improve the performance of our models, a criterion was set in our search restricting it to companies that report Net Cash from Operating Activities. Moreover, the research of Neophytou et al. (2004) showed that operating cash flow variables had not been selected in UK failure models. Finally, 57 companies were selected that went bankrupt between 2004 and 2007.

A very important part of gathering the sample is the pair match. The 57 bankrupt companies had to be matched with 57 active companies. Equivalent pair-match approaches (1 bankrupt to 1 active) are also followed by other studies (Neophytou et al., 2004). The majority of studies - as Neophytou et al. state in their comparative survey - use some of the following pair-match criteria: the same year, the same industry, the same asset size, or no specific pair-match method at all. The current research carries out the pair match with three criteria:

1. The same year. It is crucial to take the financial ratios of the matched companies from the same year, in order to avoid the ratios being distorted by some macroeconomic factor such as inflation.

2. The same industry. Financial ratios can differ from one sector to another; firms in the financial sector, for example, will have different ratios from those in the logistics sector.

3. The same asset size. Total assets were used in an effort to match companies of similar size, which is very important because large and small firms can show great differences in their financial indicators.

Variable Choice

In financial statement analysis there are several ratios that describe the structure, the performance and the effectiveness of a company. In this thesis we use the following financial ratios:

Profitability

Profit Margin %: (Profit (Loss) before Tax / Turnover) * 100

Operating Profit Margin %: (Operating Profit / Sales) * 100

Operating Revenue per Employee: (Operating Profit / Number of Employees) * 100

Gross Margin %: (Gross Profit / Turnover) * 100

EBIT Margin %: (Operating Profit / Turnover) * 100

Return on Capital Employed %: (Profit (Loss) before Tax / Total Assets less Current Liabilities) * 100

Return on Shareholders' Funds %: (Profit (Loss) before Tax / Shareholders' Funds) * 100

Shareholders' Equity Ratio: Shareholders' Funds / Total Assets

Sales per Employee: Turnover / Number of Employees

Berry Ratio: (Gross Profit + Other Operating Income pre Operating Profit) / (Other Operating Income pre Operating Profit + Exceptional Items pre Operating Profit)

Sales to Cost Ratio: Turnover / Cost of Sales

Liquidity

Solvency Ratio %: (Shareholders' Funds / Total Assets) * 100

Current Ratio: Current Assets / Current Liabilities

Liquidity Ratio: (Current Assets less Net Tangible Assets (Liab.)) / Current Liabilities

Debtor Collection Days: (Trade Debtors / Turnover) * 365

Creditors Payment Days: (Trade Creditors / Turnover) * 365

Debtors Turnover: Turnover / Trade Debtors

Stock Turnover: Turnover / Stock & Work in Progress

Operating Return on Assets (OROA): Operating Profit / Total Assets

Shareholders Liquidity Ratio: Shareholders' Funds / Long Term Liabilities

Net Assets Turnover: Turnover / Total Assets less Current Liabilities

Asset Cover: Total Assets / Long Term Debt

Leverage

Gearing %: ((Short Term Loans & Overdrafts + Long Term Liabilities) / Shareholders' Funds) * 100

Interest Cover: Profit (Loss) before Interest / Interest Paid

Long-term Debt to Assets Ratio %: (Long Term Debt / Total Assets) * 100

Cash Flow Ratios

Operating Cash Flow Ratio: Net Cash from Operating Activities / Current Liabilities

Cash Flow to Long Term Debt Ratio %: Net Cash from Operating Activities / Long Term Debt

First, there is no single established method for choosing financial indicators from the initial pool; every study selects from a wide variety of variables. According to Bellovary et al. (2007), the number of variables does not determine the final prediction accuracy. For example, the model of Jo et al. (1997), which considered 57 variables, gave 86% accuracy, while the model of Appetiti (1984), which considered 47 factors, classified firms with 92% accuracy. A higher number of factors therefore does not guarantee a higher predictive ability. The financial ratios above were obtained from the FAME database and Kenzie W. (2010) and have been classified into 4 categories so as to capture the overall financial picture of the companies. The main financial ratios are provided by the FAME database, and specific financial ratios have been selected from the bibliography. For example, the Current Ratio and the Gearing Ratio are considered predictive variables of potential business failure according to Quintana et al. (2008) and Zeitun et al. (2007) respectively. The research of Altman, Haldeman & Narayanan (1977) showed that Interest Cover is one of the significant variables for prediction. Other papers have found the following ratios to be significant: Operating Return on Assets (Kahya, 1997), Operating Profit Margin and Liquidity Ratio (Lacerda & Moro, 2008), and Operating Cash Flow Ratio (Aziz et al., 1988).

Data Preparation

Several methods have been used in the literature to determine which of the initial variables will enter the prediction model. There are studies that rely on financial and human judgement and others on statistical methods, such as the Relief F score (Kotsiantis et al., 2005), Factor Analysis (Back, 1996) and Principal Component Analysis (Min, J., & Lee, Y., 2005). McClean (2001) compared three different methods: financial and human judgement, ANalysis Of VAriance (ANOVA) and Factor Analysis. According to McClean's (2001) paper, the most accurate method for selecting variables is ANOVA. ANOVA is a statistical tool that provides a statistical test of whether or not the means of several groups are all equal. With the help of ANOVA it is possible to determine which of the variables differ significantly between the group of bankrupt companies and the group of active ones. The initial intention of this thesis was to use analysis of variance to obtain the final variables to be entered into our two prediction models, decision trees and TAN Bayes. The main reason researchers use such statistical methods is that they intend to reduce the number of predictive variables, limit the phenomenon of multicollinearity, and let only the most significant financial indicators enter both data mining methods.

First, the statistical software package PASW STATISTICS 18 is used. After importing the data into the program, the sample is checked for missing values; in every place of a missing value, the blank is filled with the number 9999. Missing values are a very common problem for researchers, but both models, decision trees and TAN Bayes, can handle them. One of the basic assumptions of ANOVA is that the sample must follow a normal distribution, so two normality tests were performed, Kolmogorov-Smirnov and Shapiro-Wilk. Both tests indicated non-normal distributions. Specifically the Shapiro-Wilk test, which is appropriate for samples below 2000 observations, showed that of the 56 distributions (28 for bankrupt and 28 for active companies) only 13 follow a normal distribution. This is a problem that many similar studies have faced: in their cumulative survey, Dimitras et al. (1996) stated that this lack of fit to the normal distribution holds for most financial ratios and has been noticed by several authors. For that reason it was decided to run the equivalent non-parametric test. According also to Chih-Hung Wu et al. (2006), the appropriate test is the Mann-Whitney-Wilcoxon (MWW). The MWW is a non-parametric test that assesses whether one of two mutually independent samples of observations tends to have larger values than the other. Specifically, it calculates the sum of ranks for the larger of the two groups (either distressed firms or non-distressed firms).
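The same screening step can be sketched outside PASW with scipy; the two arrays below are hypothetical stand-ins for one ratio's values in the two groups:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
failed = rng.normal(-0.1, 0.3, size=57)   # a ratio for the 57 failed firms
active = rng.normal(0.2, 0.3, size=57)    # the same ratio for the 57 active firms

stat, p = mannwhitneyu(failed, active, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}, keep variable: {p < 0.05}")
```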

Financial ratio                              p-value
Profit Margin %                              0.000
Cash Flow to Long Term Debt Ratio %          0.063
Shareholders' Equity Ratio                   0.632
Gross Margin %                               0.192
Return on Shareholders' Funds %              0.001
Berry Ratio                                  0.004
Return on Capital Employed %                 0.000
EBIT Margin %                                0.000
Liquidity Ratio                              0.459
Net Assets Turnover                          0.080
Gearing %                                    0.798
Interest Cover                               0.000
Operating Profit Margin %                    0.000
Stock Turnover                               0.492
Operating Cash Flow Margin                   0.075
Debtors Turnover                             0.831
Long-term Debt to Assets Ratio %             0.293
Debtor Collection Days                       0.831
Operating Revenue per Employee               0.000
Creditors Payment Days                       0.029
Operating Return on Assets (OROA) %          0.000
Current Ratio                                0.177
Sales per Employee                           0.119
Shareholders Liquidity Ratio                 0.391
Sales to Cost Ratio                          0.210
Solvency Ratio %                             0.722
Operating Cash Flow Ratio                    0.005
Asset Cover                                  0.709

Table 4.1 Mann-Whitney-Wilcoxon test

From the above table, the financial indicators with a p-value below the significance level (a = 0.05) are significant for predicting corporate failure. This means that 11 variables will be imported into the data mining techniques, namely:

Variable    Financial ratio                        Variable    Financial ratio
V2          Profit Margin %                        V8          Operating Cash Flow Ratio
V3          Return on Shareholders' Funds %        V9          Berry Ratio
V4          Return on Capital Employed %           V10         EBIT Margin %
V5          Operating Profit Margin %              V11         Interest Cover
V6          Operating Revenue per Employee         V12         Creditors Payment Days
V7          Operating Return on Assets (OROA) %

Table 4.2 Variables for the DT and TAN Bayes

The V1 variable indicates the status of the company: if the company belongs to the failed group, the value of V1 is 0, otherwise it is 1. The total of 11 variables is also in line with the bibliography, where Hongkyu et al. (1996) mention that the number of variables retained after an initial statistical screening typically ranges from 6 to 20.

Decision Trees

From the Mann-Whitney-Wilcoxon test, 11 financial ratios have been selected to be used by the decision tree. There are several data mining packages, such as CART (Salford Systems), DBMiner, Enterprise Miner (SAS) and SPSS. In this thesis the open-source Weka (Waikato Environment for Knowledge Analysis) 3.6.7 is applied, a machine learning workbench written in Java and developed at the University of Waikato, New Zealand. Weka is a very widespread tool with plenty of different algorithms; it is easy to use and also provides visualisation of the DT and TAN Bayes.

Weka contains tools for data preprocessing, classification, regression and clustering. The main way to use Weka is to apply a learning algorithm to a dataset and analyse the output to extract information about the data. Weka expects the data file it processes to be in ARFF format or in the form of a CSV file, so the .xls file with the significant financial indicators from the Mann-Whitney-Wilcoxon test has to be converted to the .arff file format. The next step is to determine which algorithm will be chosen for the research. C4.5 is the chosen algorithm, and J48 is its Weka implementation. Quinlan's algorithm has been tested in several studies, as mentioned in the literature review, and provides a clear and efficient decision tree.
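ARFF is a small text header followed by comma-separated rows, so the conversion can also be scripted; a sketch (assuming the .xls sheet has first been exported to CSV with the class variable V1 in the last column) could be:

```python
import csv

def csv_to_arff(csv_path, arff_path, relation="bankruptcy"):
    """Write a numeric ARFF file with a nominal class from a CSV export."""
    with open(csv_path) as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    with open(arff_path, "w") as out:
        out.write(f"@relation {relation}\n\n")
        for name in header[:-1]:                   # the ratios are numeric
            out.write(f"@attribute {name} numeric\n")
        out.write(f"@attribute {header[-1]} {{0,1}}\n\n@data\n")  # 0 = failed
        for row in data:
            out.write(",".join(row) + "\n")

csv_to_arff("ratios.csv", "ratios.arff")   # hypothetical file names
```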

First, because the sample is small, we use the whole sample as a training set. This means that the algorithm does not use any separate set for testing and is trained on the whole sample, which leads to "optimistic results": since the DT has been learned from the same training data, any estimate based on those data will be optimistic. This measure, using the whole sample as a training set, may not be representative, but it gives us the optimal performance of our model (Witten et al., 2006, pp. 144-146).

The C4.5 algorithm yields the following results:

              Percentage of correct classification    Percentage of false classification
Active        56/57 = 98.25%                          1/57 = 1.75%
Bankrupted    35/57 = 61.40%                          22/57 = 38.60%
Average       79.82%                                  20.18%

Table 4.3 Results of DT, using the whole sample as training set

From the above table it can be concluded that the classifier categorises 56 active companies as active and one active company as bankrupt. It also classifies 35 bankrupt companies as bankrupt and 22 bankrupt companies as active. So the algorithm classifies 79.82% of the companies correctly and 20.18% falsely.

Apart from the accuracy of the model, the costs of the misclassification errors, type I and type II, are also very important. When the algorithm classifies a failed firm as active, this is called a type I error; when it categorises a healthy firm as failed, it is a type II error. In this kind of research the cost of a type I error is higher than that of a type II error (Aziz et al., 2006). For example, for a credit institution a type I error means a loss of capital in case of a bankruptcy, whereas a type II error means a loss of potential profit. So the type I error is less acceptable than the type II error.

From the above table the type I error is 38.60% and the type II error 1.75%. Now the 10-fold cross-validation is applied. The results from the 10-fold cross-validation are:

              Percentage of correct classification    Percentage of false classification
Active        45/57 = 78.95%                          12/57 = 21.05%
Bankrupted    32/57 = 56.14%                          25/57 = 43.86%
Average       67.55%                                  32.45%

Table 4.4 Results of DT, 10-fold cross-validation

From this table we can see that the algorithm classifies 45 active firms correctly and 12 active firms as failed. It also categorises 32 failed firms as failed and 25 failed firms as active. On average it classifies correctly with 67.55% accuracy and falsely with 32.45%. The type I error is 43.86% and the type II error 21.05%.

Weka offers an option to visualise the decision tree that has been created. The resulting decision tree is:

Figure 4.1 Visualisation of the DT

In this figure, four variables are presented that have been selected by the algorithm in order to make the prediction: V7 = Operating Return on Assets (OROA), V9 = Berry Ratio, V3 = Return on Shareholders' Funds and V6 = Operating Revenue per Employee. First, the decision tree decides to split on OROA: the firms with an OROA ratio above -0.26 are classified as active. The rest are categorised by the Berry Ratio, and if they have this ratio above 0.83 they are classified as active. At the next step OROA appears again, this time split at the value -9.62; if the value of the ratio is above -9.62 the firms are categorised as bankrupt. If it is below -9.62, the decision tree splits at -63.58 in another node on the variable Return on Shareholders' Funds: if this ratio is above -63.58 the algorithm classifies the firms as active. Otherwise it uses Operating Revenue per Employee at the value -21.65 to qualify as bankrupt the firms that are below that point and as active the firms that are above it.
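Read as rules, the tree described above amounts to the following sketch; note that the leaf at the second OROA split is garbled in the source and is assumed here to predict "bankrupt":

```python
def classify_firm(oroa, berry, rosf, rev_per_emp):
    """Rule rendering of the C4.5 tree described in the text (illustrative only)."""
    if oroa > -0.26:
        return "active"
    if berry > 0.83:
        return "active"
    if oroa > -9.62:
        return "bankrupt"          # assumption: this leaf is garbled in the source
    if rosf > -63.58:
        return "active"
    return "bankrupt" if rev_per_emp < -21.65 else "active"
```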

So this algorithm shows that these four financial indicators are significant for the prediction of business failure. Operating Return on Assets (OROA), which was also found significant by Kahya (1997), appears twice in the decision tree. OROA indicates whether the management is generating adequate operating profits from the assets of the company, namely whether the management uses all its assets efficiently in order to generate profit. The Berry Ratio is the ratio of a business's gross income to its operating costs; a Berry Ratio above 1 shows that the firm makes a profit above all variable expenses, and below 1 that it loses money. Return on Shareholders' Funds, or Return on Equity, which is an equivalent term, shows whether a company is profitable and thus has more profit available to pay its shareholders. The purpose of ROE, which was also found significant by Andreica M. et al. (2009), is to indicate whether a firm efficiently uses the capital it receives from its shareholders to generate an investment return for them. Operating Revenue per Employee indicates productivity, namely how efficiently the company is structured so that every employee generates more profit for it.

In general, a characteristic of financial ratios is that individually they can explain only a small part of each company; they have to be combined with other financial ratios in order to present a more complete financial analysis of the company. The ratios of a company are also compared with the ratios of the industry to which it belongs in order to determine whether they are good or not.

To sum up: after the selection of 28 initial variables and their reduction with the non-parametric Mann-Whitney-Wilcoxon test, the C4.5 algorithm was applied and it produced 4 variables - financial ratios - that can predict bankruptcy with 67.55% accuracy under 10-fold cross-validation.

Tree Augmented Naïve Bayes

The same methodological steps that were applied to the decision tree are also applied to TAN Bayes. The financial ratios are imported with the same procedure into Weka and the TAN Bayes algorithm is run. First, the whole sample is used again as a training set in order to observe the corresponding accuracy and find the "optimal" performance of our model. The results are the following:

              Percentage of correct classification    Percentage of false classification
Active        47/57 = 82.46%                          10/57 = 17.54%
Bankrupted    40/57 = 70.18%                          17/57 = 29.82%
Average       76.32%                                  23.68%

Table 4.5 Results of TAN Bayes, using the whole sample as training set

From the table, the accuracy of the algorithm is 76.32%, with a type I error of 29.82% and a type II error of 17.54%. Compared with the decision tree on the same training set, TAN Bayes achieves a lower accuracy rate, 76.32% against 79.82%; its type I error is 29.82% compared with 38.60%, and its type II error 17.54% compared with 1.75% respectively. So the decision tree achieves better results in accuracy, but TAN Bayes has the smaller type I error, which is more important than the type II error.

Now the same algorithm is run with the 10-fold validation method in order to be compared with the decision tree; both models have to be verified with the same validation method in order to be comparable. So the 10-fold cross-validation is applied and the results are the following:

              Percentage of correct classification    Percentage of false classification
Active        44/57 = 77.19%                          13/57 = 22.81%
Bankrupted    34/57 = 59.65%                          23/57 = 40.35%
Average       68.42%                                  31.58%

Table 4.6 Results of TAN Bayes, 10-fold cross-validation

The accuracy of the model reaches 68.42%, slightly above the decision tree. Its type I error, 40.35%, is also slightly smaller than the decision tree's 43.86%, while its type II error is slightly larger. Overall, TAN Bayes achieves a slightly better performance, but the difference from the decision tree is small.

Figure 4.2 Visualisation of TAN Bayes

The TAN Bayes algorithm uses all the variables in order to make the prediction. The main characteristic of this specific algorithm is that it permits correlation and dependence among the variables. So from the main class node descend the 11 different variables, which are allowed to affect each other. The main disadvantage of Bayesian network classifiers is that they do not provide the weights of the predictive variables; specifically, it is not feasible to observe to what extent each variable "helps" the prediction, nor to determine the degree to which each variable affects the others. For example, the variables V3, V5 and V6, namely Return on Shareholders' Funds, Operating Profit Margin and Operating Revenue per Employee, show correlations with V7, Operating Return on Assets. This can be explained by the fact that the financial ratios V5, V6 and V7 have operating profit as numerator and V3 has profit before taxes. Similar correlations are also spotted for other ratios, such as V3 with V11 and V4, V10 with V5, V2 with V10, and V2 with V8 and V12.

Results and Discussion

In this work it has been tested whether the prediction of business failure using financial indicators is possible. A sample of non-failed and failed companies from the manufacturing industry in the UK was selected using the pair-match technique, and 28 financial ratios were chosen from the FAME database. In the sample, non-normal distributions were detected by the Shapiro-Wilk test, even though the majority of studies assume that financial ratios follow a normal distribution and apply corresponding parametric methods (Dimitras et al., 1996). Here lies a contrast that this work also tries to highlight. Data Mining methods are used for several reasons, and one of the advantages of these artificial intelligence techniques is that they are non-parametric tools, so they can handle non-normal distributions; this is one of the reasons they are preferred for real data sets. Yet several studies, before applying these non-parametric techniques, apply parametric methods such as ANOVA - a rather contradictory way of approaching real data sets such as financial ratios. This study found that the financial indicators follow non-normal distributions, and for that reason it was decided to depart from the main route of the majority of studies and to apply, instead of ANOVA, the equivalent non-parametric method Mann-Whitney-Wilcoxon. From that method 11 of the initial 28 variables were selected and fed into two recent Data Mining techniques, Decision Trees and Tree Augmented Naïve Bayes. These two tools were selected because of their simplicity, their accuracy and their robustness. Furthermore, TAN Bayes is an improvement of the classical Naïve Bayes whose accuracy in predicting business failure has been tested in only a few studies; it is the first time that this technique is tested on UK companies with the variables reduced by a non-parametric test. A 10-fold cross-validation was then conducted for both models: the decision tree reached 67.55% accuracy and TAN Bayes 68.42%, and TAN Bayes also showed a slightly smaller type I error. So it is difficult to conclude that either of these tools dominates the other. The overall accuracy of the two models lies within the range of results that other studies have found, as mentioned in the literature review. The one thing worth highlighting is that DTs may be preferable for the researcher because of the transparency they offer. The DT provides the 4 financial ratios that are important for the prediction: Operating Return on Assets (OROA), Berry Ratio, Return on Shareholders' Funds and Operating Revenue per Employee. With the help of these 4 variables the DT achieved about the same accuracy as TAN Bayes.

The use of Data Mining in forecasting business failure is increasing. Several studies have been published and several methods have been applied, with fairly good performance and accuracy. When a firm is going bankrupt, there are many reasons behind it beyond the ratios that can be extracted from the financial statements. A possible extension of this research would be to include some non-financial variables in the study, such as market ratios, customer concentration or even the education level of the employees. Another promising direction in studying the forecasting of business failure is a comparison of data mining techniques between the period before the financial crisis of 2007 and the period after it. It would be very useful for the theory of financial and accounting management to find the ratios which can indicate bankruptcy both in periods of financial and economic expansion and in periods of economic difficulty.