Article

Inferring About the Average Value of Audit Errors from Sequential Ratio Tests

by Grzegorz Sitek 1,† and Mariusz Pleszczyński 2,*,†
1 Department of Statistics, Econometrics and Mathematics, University of Economics in Katowice, ul. 1 Maja 50, 40-287 Katowice, Poland
2 Department of Mathematics Applications and Methods for Artificial Intelligence, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2024, 26(11), 998; https://doi.org/10.3390/e26110998
Submission received: 16 October 2024 / Revised: 18 November 2024 / Accepted: 18 November 2024 / Published: 20 November 2024
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The book amounts are modeled as values of a random variable, represented by a mixture of distributions of both the correct and error-contaminated amounts. The mixing coefficient represents the proportion of items with non-zero error amounts. This study addresses the problem of determining the sample size needed for testing statistical hypotheses regarding mean accounting errors. The average sample size is estimated using the Sequential Probability Ratio Test (SPRT), applying the Monte Carlo method. Estimating average audit errors is a common challenge in economic research.

1. Introduction

Let $X_i$ represent the book amount of the ith item in an account, with the total population book amount denoted as $X_U = \sum_{i=1}^{N} x_i$. At regular intervals, an auditor samples n line items from the account to compare against their correct values. Therefore, let $Y_i$ denote the audited amount for the ith line item, and let $\tau_i = X_i - Y_i$ denote the error amount. Note that the total book amount is known to the auditor. The fundamental issue is constructing confidence limits for means or totals in finite populations where the underlying distribution is highly skewed and contains a substantial proportion of zero values. This situation is often encountered in statistical applications such as statistical auditing, reliability assessments, and insurance. The most distinctive feature of accounting data is the large proportion of line items without error, so an audit sample may not yield any nonzero error amounts. For data with mostly zero observations, classical interval estimation based on asymptotic normality is not reliable. In auditing practice, auditors are often more interested in obtaining lower or upper confidence limits than in obtaining two-sided confidence intervals. Specifically, independent public accountants are focused on estimating the lower confidence bound for the total audited amount to avoid overestimation and potential legal liability. Stringer and Kaplan (see [1,2]) have demonstrated that accounting populations are highly positively skewed, with considerable diversity in the characteristics of error amounts across subsystems. Several distributions exhibit the same form as the distributions observed in accounting populations, including the gamma, log-normal, Weibull, and beta distributions. The error rates are usually very low, which renders many existing statistical procedures inappropriate for estimation and hypothesis testing of error rates and error amounts. There are two main types of audit tests where statistical sampling is advantageous. The first is a compliance test, used to determine the procedural error rate in a population of transactions. The second is a substantive test of details, aimed at evaluating the aggregate monetary error in a stated balance. Inferences about the total error amount are usually made using confidence intervals, which are closely related to hypothesis testing. The decision-making process in auditing is treated as a problem of testing statistical hypotheses about the admissibility of the total or mean accounting errors. This approach allows auditors to control both the significance level (risk of incorrect rejection) and the type II error probability (risk of incorrect acceptance). Substantive tests of details verify the accuracy of recorded monetary values in financial statements, providing direct evidence regarding the accuracy of the total recorded values. Auditors may apply substantive tests extensively or use compliance tests to evaluate the efficiency and effectiveness of internal controls in mitigating material errors. In compliance tests, the variable of interest is the error rate (the proportion of transactions for which the internal control operates incorrectly). Samples of transactions are used to infer this error rate.

2. A Mixture of Probability Distributions as a Model for Generating Accounting Values

Wywiał in [3] proposed the following model. Let U be a population of accounting documents of size N with given accounting totals. Some of the documents contain errors. In the population U, the accounting totals (values) $x_i$ are given for each element $i \in U$. Let $\mathbf{x}^T = [x_1, \ldots, x_N] \in \mathbb{R}_+^N$ be an observation of the random vector $\mathbf{X}^T = [X_1, \ldots, X_N]$. We denote the true book values (without errors) by $y_i$, $i \in U$, and let $\mathbf{y}^T = [y_1, \ldots, y_N] \in \mathbb{R}_+^N$ be an observation of the random vector $\mathbf{Y}^T = [Y_1, \ldots, Y_N]$. The vector of accounting values contaminated by errors, $\mathbf{w}^T = [w_1, \ldots, w_N]$, is an observation of the random vector $\mathbf{W}^T = [W_1, \ldots, W_N]$. Finally, let $\mathbf{Z}^T = [Z_1, \ldots, Z_N]$, where $Z_i = 0$ ($Z_i = 1$) if $X_i = Y_i$ ($X_i \neq Y_i$).
In practice, all values of X are known before the auditing process. The observations x of X are treated as specific auxiliary data. The auditing process leads to observation of the values $Z_i$, $Y_i$ and $W_i$, $i \in U$. Let $\bar{X} = \frac{1}{N}\sum_{i \in U} X_i$, $\bar{Y} = \frac{1}{N}\sum_{i \in U} Y_i$, $\bar{W} = \frac{1}{N}\sum_{i \in U} W_i$. Their values will be denoted by $\bar{x}$, $\bar{y}$, $\bar{w}$.
Let an auditor arbitrarily select a sample s of size n from U, $n \leq N$. Hence, $\mathbf{X}_s$ is the corresponding subvector of $\mathbf{X}$. The random vector $\mathbf{X}_s$ is observed in s, where the objects are controlled. After the auditing process, the sample s is split into two disjoint sub-samples $s_0$ and $s_1$, where $s = s_0 \cup s_1$. The set $s_1$ is of size $n_1 = k$ and the set $s_0$ is of size $n_0 = n - k$. In the sub-sample $s_0$, accounting amounts without errors are observed. Before the auditing process, we have observations of the following data:
$\mathbf{X} = \{X_i : i \in U\} = (\mathbf{X}_s, \mathbf{X}_{U \setminus s}),$
where
$\mathbf{X}_s = \{X_i : i \in s\}, \qquad \mathbf{X}_{U \setminus s} = \{X_i : i \in U \setminus s\}.$
After the auditing process, we have observations of the following data:
$\mathbf{T}_U = (\mathbf{T}_s, \mathbf{X}_{U \setminus s}), \qquad \mathbf{T}_s = \{(X_i, Z_i) : i \in s\} = (\mathbf{Y}_{s_0}, \mathbf{W}_{s_1}).$
The values of $\mathbf{T}$, $\mathbf{T}_U$, $\mathbf{X}$, $\mathbf{X}_s$, $\mathbf{X}_{U \setminus s}$, $\mathbf{Y}_{s_0}$ and $\mathbf{W}_{s_1}$ are denoted, respectively, by $\mathbf{t}$, $\mathbf{t}_U$, $\mathbf{x}$, $\mathbf{x}_s$, $\mathbf{x}_{U \setminus s}$, $\mathbf{y}_{s_0}$ and $\mathbf{w}_{s_1}$. In the following, we assume that $\mathbf{y}_{s_0} = \mathbf{y}_s$ and $\mathbf{w}_{s_1} = \mathbf{w}_s$.
Let $\tau = E(\bar{X} - \bar{Y})$ be the expected mean accounting error. The purpose of the audit is inference about $\tau$ or about the expected total accounting error $N\tau = E\left(\sum_{i \in U} X_i - \sum_{i \in U} Y_i\right)$.
Let $F_0(y|\theta_0)$ be the probability distribution function of the random variable Y, whose values are the true accounting amounts, where $\theta_0 \in \Theta_0$ and $\Theta_0$ is the parameter space. The distribution function of W is denoted by $F_1(w|\theta_1)$, where $\theta_1 \in \Theta_1$. Moreover, let $\Theta = \Theta_0 \times \Theta_1$. We assume that an accounting error appears with probability p. We write $Z = 1$ when an accounting error occurs, $P(Z = 1) = p$, and $Z = 0$ when it does not occur, $P(Z = 0) = 1 - p$. According to the well-known total probability theorem, we have $F(x) = F(x \mid Z = 0)P(Z = 0) + F(x \mid Z = 1)P(Z = 1)$, and finally
$F(x|\theta) = (1 - p) F_0(x|\theta_0) + p F_1(x|\theta_1),$ (1)
where $\theta = (\theta_0, \theta_1)$ and $\theta \in \Theta = \Theta_0 \times \Theta_1$. Hence, the probability distribution of the observed accounting amounts is a mixture of the distribution function $F_0(x|\theta_0)$ of the true amounts and the distribution function $F_1(x|\theta_1)$ of the amounts contaminated by errors. When the random variables Y and W are continuous, differentiating both sides of Equation (1) yields
$f(x|\theta) = (1 - p) f_0(x|\theta_0) + p f_1(x|\theta_1).$ (2)
Therefore, the probability density of the observed accounting amounts is a mixture of the density $f_0(x|\theta_0)$ of the true amounts and the density $f_1(x|\theta_1)$ of the amounts contaminated by errors. Let R, the accounting error, and Y be independent. Hence $W = Y + R$, $X = Y + ZR$, and $X = (1 - Z)Y + ZW$. The basic moments of the random variable X are:
$E(X) = (1 - p) E(X \mid Z = 0) + p E(X \mid Z = 1) = (1 - p) E(Y) + p E(W),$ (3)
$V(X) = p(1 - p)\left(E(W) - E(Y)\right)^2 + p V(W) + (1 - p) V(Y).$ (4)

2.1. A Mixture of Gamma Probability Distributions as a Model for Generating Accounting Values

We denote the well-known gamma probability distribution by $G(\alpha, \beta)$, where the parameters $\alpha > 0$ and $\beta > 0$ are called the scale and shape parameters. The shape of the gamma density does not depend on the scale parameter, because its skewness and kurtosis coefficients are equal to $2/\sqrt{\beta}$ and $6/\beta$, respectively. Wywiał in [3] considered a model based on a mixture of gamma distributions. Let $Y \sim G(a, c)$ and $R \sim G(b, c)$ be independent random variables. The advantage of this model is that the density of the sum of the two gamma variables can be determined explicitly: under the above assumptions, $W = Y + R \sim G(a + b, c)$. Using the previous considerations, we obtain
$f(x|a, b, c) = (1 - p) f_0(x|a, c) + p f_1(x|a, b, c),$
where
$f_1(x|a, b, c) = \frac{c^{a+b}}{\Gamma(a+b)}\, x^{a+b-1} e^{-cx}, \quad x > 0,$
$f_0(x|a, c) = \frac{c^{a}}{\Gamma(a)}\, x^{a-1} e^{-cx}, \quad x > 0.$
From Formulas (3) and (4) we obtain
$E(X) = \frac{a + pb}{c}, \qquad V(X) = \frac{a + pb + p(1 - p)b^2}{c^2}.$
For more on the use of the gamma decomposition to model accounting values, see the articles [4,5]. The book amounts are treated as values of a random variable whose distribution is a mixture of the distribution of the correct amounts and the distribution of the true amounts contaminated by errors. Both distributions are right-skewed, because small book amounts are more frequent than large ones. It is convenient to assume that the book values are an additive function of the true accounting amounts and the accounting errors. Hence, we can expect that this quite simple model describes accounting data well.
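To illustrate, the moment formulas above can be checked by simulating the mixture directly from the construction $X = (1-Z)Y + ZW$. The following is a minimal R sketch, not taken from the paper; the parameter values are illustrative and only roughly match those used later in the simulation study.

```r
# Sketch: simulate X = (1 - Z)Y + ZW for the gamma mixture and check the moment
# formulas E(X) = (a + p*b)/c and V(X) = (a + p*b + p*(1 - p)*b^2)/c^2.
# Parameter values are illustrative only.
set.seed(1)
a <- 2; b <- 1.3; c <- 0.002; p <- 0.1; n <- 1e6

z <- rbinom(n, size = 1, prob = p)        # error indicator Z
y <- rgamma(n, shape = a, rate = c)       # true amounts,        Y ~ G(a, c)
w <- rgamma(n, shape = a + b, rate = c)   # amounts with errors, W ~ G(a + b, c)
x <- (1 - z) * y + z * w                  # observed book amounts

c(mean(x), (a + p * b) / c)                          # empirical vs. theoretical mean
c(var(x), (a + p * b + p * (1 - p) * b^2) / c^2)     # empirical vs. theoretical variance
```

Sampling Z first and then mixing Y and W mirrors the interpretation of p as the proportion of documents that contain errors; with $10^6$ draws the empirical moments agree with the theoretical values up to simulation error.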

2.2. A Mixture of Poisson Probability Distributions as a Model for Generating Accounting Values

We assume that $Y \sim Pois(a)$ (true values) and $R \sim Pois(b)$ (accounting error), where $Pois(\lambda)$ denotes the Poisson distribution with probability function
$P(k) = \frac{\lambda^k e^{-\lambda}}{k!}, \quad k = 0, 1, 2, \ldots$
If the variables Y and R are independent, then the random variable $W = Y + R \sim Pois(a + b)$, and
$P(x) = (1 - p) P_0(x|a) + p P_1(x|a, b),$
$P_1(x|a, b) = \frac{(a+b)^x e^{-a-b}}{x!}, \qquad P_0(x|a) = \frac{a^x e^{-a}}{x!}, \qquad x = 0, 1, 2, \ldots$
From Formulas (3) and (4) we obtain
$E(X) = (1 - p)a + p(a + b) = a + pb, \qquad V(X) = a + bp(b + 1 - pb).$
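Analogously, a short R sketch (illustrative parameter values only, not taken from the paper) checks the Poisson-mixture probability function and the moment formulas above numerically:

```r
# Sketch: pmf of the Poisson mixture and a numerical check of the moment formulas
# E(X) = a + p*b and V(X) = a + b*p*(b + 1 - p*b). Parameter values are illustrative.
dmixpois <- function(x, a, b, p) {
  (1 - p) * dpois(x, lambda = a) + p * dpois(x, lambda = a + b)
}

a <- 1000; b <- 50; p <- 0.1
xs <- 0:3000                                   # effectively the whole support here
m1 <- sum(xs * dmixpois(xs, a, b, p))          # numerical mean
m2 <- sum(xs^2 * dmixpois(xs, a, b, p))        # numerical second moment

c(m1, a + p * b)                               # mean:     numerical vs. formula
c(m2 - m1^2, a + b * p * (b + 1 - p * b))      # variance: numerical vs. formula
```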

3. Sequential Tests Based on the Ratio of the Likelihood Function

The set of methods for proceeding sequentially when verifying statistical hypotheses is called sequential analysis (see [6]). Sequential analysis was created by Wald (see [7]). The sequential approach to statistical inference was the subject of systematic research during the Second World War; this research concerned the quality of munitions and was conducted by the Statistical Research Group at Columbia University. In sequential analysis, the data are used both to decide when to stop observation (data collection) and to draw the actual conclusions (regarding the parameter being estimated or the hypotheses being tested). The sequential method is applicable in auditing when assessing the internal control system in compliance testing, i.e., when verifying hypotheses regarding the proportion of accounting errors. In their doctoral thesis, Byekwaso [8] used a Dirichlet–multinomial model to estimate and test hypotheses about error rates in accounting populations, using Bayesian methods with sequential stopping rules.

3.1. Sequential Ratio Test

Let $f(x, \theta)$ be the density (probability) function with $\theta = [\tau, \gamma]$, where τ is subject to verification and γ is the set of nuisance parameters, irrelevant from the point of view of testing the hypothesis $H_0(\tau = \tau_0)$ against $H_1(\tau = \tau_1)$. We propose a hypothesis $H_0(\theta = \theta_0)$. The Hypothesis $H_0$ is verified by comparing it with the Hypothesis $H_1(\theta = \theta_1)$, where $\theta_1$ is some specific value different from $\theta_0$, and $\theta_j = [\tau_j, \gamma]$, $j = 0, 1$. In the Sequential Probability Ratio Test (SPRT), we progressively sample one or more items at each stage of the sequential hypothesis verification. For the predetermined error probabilities α and β, the numbers $A = \frac{1-\beta}{\alpha}$ and $B = \frac{\beta}{1-\alpha}$ satisfy the condition $0 < B < 1 < A$. The elements are drawn as a simple random sample. For the first drawn observation $x_1$, we determine the value of the ratio $\frac{f(x_1, \theta_1)}{f(x_1, \theta_0)}$. If
• $\frac{f(x_1, \theta_1)}{f(x_1, \theta_0)} \geq A$, then we reject the Hypothesis $H_0$ in favor of the Hypothesis $H_1$;
• $\frac{f(x_1, \theta_1)}{f(x_1, \theta_0)} \leq B$, then we accept the Hypothesis $H_0$;
• $B < \frac{f(x_1, \theta_1)}{f(x_1, \theta_0)} < A$, then we select the next element for the sample.
In sequential proceedings, it is more convenient to operate with the logarithms of the numbers A and B and with the random variables
$z_i = \ln\frac{f(x_i, \theta_1)}{f(x_i, \theta_0)}, \quad i = 1, 2, \ldots, n.$
If in the sample we have m elements ($m \geq 1$), then if
• $\sum_{i=1}^{m} z_i \geq \ln A$, then we reject the Hypothesis $H_0$ in favour of the Hypothesis $H_1$;
• $\sum_{i=1}^{m} z_i \leq \ln B$, then we accept the Hypothesis $H_0$;
• $\ln B < \sum_{i=1}^{m} z_i < \ln A$, then we select the next element for the sample.
Let X be a random variable with parameter θ and density $f(x, \theta)$. We consider the following simple hypotheses about the value of the parameter θ:
$H_0: \theta = \theta_0, \qquad H_1: \theta = \theta_1 > \theta_0.$ (14)
The verification of the null hypothesis (14) consists in calculating, on the basis of an n-element random sample $(X_1, \ldots, X_n)$, the value of the statistic
$l_n = \ln\frac{f(X_1, \ldots, X_n, \theta_1)}{f(X_1, \ldots, X_n, \theta_0)} = \sum_{i=1}^{n} \ln\frac{f(x_i, \theta_1)}{f(x_i, \theta_0)},$
and comparing it with the designated constants A and B. The SPRT can be extended to versions in which the unknown parameters are replaced by their maximum-likelihood estimators ([9]); the previously described sequential procedure is then applied with
$l_n = \sum_{i=1}^{n} \ln\frac{f(x_i, \hat{\theta}_1)}{f(x_i, \hat{\theta}_0)},$
$\hat{\theta}_j = \arg\max_{\theta_j \in \Theta_j} \prod_{i=1}^{n} f(x_i, \theta_j), \quad j = 0, 1,$
where $\Theta_j = [\tau_j, \gamma]$ and $\hat{\theta}_j = [\tau_j, \hat{\gamma}]$.
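To make the procedure concrete, the following minimal R sketch implements the generic SPRT loop; the log-densities under $H_0$ and $H_1$ are supplied by the caller, and the function and argument names are illustrative rather than taken from the paper.

```r
# Sketch of Wald's SPRT: draw observations one at a time and stop as soon as
# the cumulative log-likelihood ratio leaves the interval (ln B, ln A).
sprt <- function(draw, logf1, logf0, alpha, beta, max_n = 10000) {
  logA <- log((1 - beta) / alpha)   # ln A
  logB <- log(beta / (1 - alpha))   # ln B
  s <- 0
  for (n in 1:max_n) {
    x <- draw(1)                    # sample the next item
    s <- s + logf1(x) - logf0(x)    # add z_i = ln f(x_i, theta1) / f(x_i, theta0)
    if (s >= logA) return(list(decision = "reject H0", n = n))
    if (s <= logB) return(list(decision = "accept H0", n = n))
  }
  list(decision = "no decision", n = max_n)
}

# Example: simple Poisson observations, theta0 = 5 vs. theta1 = 6, data generated under H1.
set.seed(1)
sprt(draw  = function(k) rpois(k, 6),
     logf1 = function(x) dpois(x, 6, log = TRUE),
     logf0 = function(x) dpois(x, 5, log = TRUE),
     alpha = 0.05, beta = 0.05)
```

In the audit setting of the following sections, the two log-densities would be replaced by the log-likelihoods of the Poisson or gamma mixtures, with the nuisance parameters estimated as in the maximum-likelihood extension above.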

3.2. The Expected Sample Size

The approximate expected value of the sample size for a sequential ratio test is given by the formula ([6]):
$E_\theta(n) = \frac{L(\theta)\ln B + (1 - L(\theta))\ln A}{E_\theta(z)},$
where $z = \ln\frac{f(x, \theta_1)}{f(x, \theta_0)}$ and $L(\theta)$ is the OC function of the sequential ratio test. The OC function describes the probability of accepting the Hypothesis $H_0$ ([10]). The function $L(\theta)$ is given by the formula
$L(\theta) = \frac{A^{h_0(\theta)} - 1}{A^{h_0(\theta)} - B^{h_0(\theta)}}.$
The function $h_0(\theta)$ for a continuous variable x satisfies the condition
$\int \left(\frac{f(x, \theta_1)}{f(x, \theta_0)}\right)^{h_0(\theta)} f(x, \theta)\,dx = 1.$
The function $h_0(\theta)$ for a discrete variable x satisfies the condition
$\sum_j \left(\frac{f(x_j, \theta_1)}{f(x_j, \theta_0)}\right)^{h_0(\theta)} f(x_j, \theta) = 1.$
The expected value E ( n ) is determined from the formula
$E(n) = \frac{1}{2}\left[\frac{(1-\alpha)\ln B + \alpha\ln A}{E_\alpha(n)} + \frac{\beta\ln B + (1-\beta)\ln A}{E_\beta(n)}\right].$
We use the above expressions to determine the expected sample size for tests on the mean value of accounting errors. We test the hypothesis
$H_0: \tau = \tau_0, \qquad H_1: \tau = \tau_1 > \tau_0,$
where τ denotes the mean value of the accounting errors determined from the mixture of Poisson or gamma distributions.
The expected value $E_\alpha(n)$ is determined by the formula
$E_\alpha(n) = \sum_{x=0}^{\infty} P_0(X=x) \ln\frac{P_1(X=x)}{P_0(X=x)},$
where
$P_1(X=x) = (1-p)\,\frac{e^{-a}a^x}{x!} + p\,\frac{e^{-a-\tau_1/p}\left(a+\frac{\tau_1}{p}\right)^{x}}{x!},$
and
$P_0(X=x) = (1-p)\,\frac{e^{-a}a^x}{x!} + p\,\frac{e^{-a-\tau_0/p}\left(a+\frac{\tau_0}{p}\right)^{x}}{x!}.$
The expected value $E_\beta(n)$ is determined by the formula
$E_\beta(n) = \sum_{x=0}^{\infty} P_1(X=x) \ln\frac{P_1(X=x)}{P_0(X=x)}.$
The expected value $E_\alpha(n)$ for a mixture of gamma distributions is determined from the formula
$E_\alpha(n) = \int_0^{\infty} g_0(x) \ln\frac{g_1(x)}{g_0(x)}\,dx.$
The expected value $E_\beta(n)$ is determined from the formula
$E_\beta(n) = \int_0^{\infty} g_1(x) \ln\frac{g_1(x)}{g_0(x)}\,dx,$
where
$g_0(x) = (1-p)\,\frac{c^{a} e^{-cx} x^{a-1}}{\Gamma(a)} + p\,\frac{c^{\,a+c\tau_0/p}\, e^{-cx}\, x^{\,a+c\tau_0/p-1}}{\Gamma\!\left(a+\frac{c\tau_0}{p}\right)},$
and
$g_1(x) = (1-p)\,\frac{c^{a} e^{-cx} x^{a-1}}{\Gamma(a)} + p\,\frac{c^{\,a+c\tau_1/p}\, e^{-cx}\, x^{\,a+c\tau_1/p-1}}{\Gamma\!\left(a+\frac{c\tau_1}{p}\right)}.$
For fixed parameters a, c, $\tau_i$, and p, the expected values $E_\alpha(n)$ and $E_\beta(n)$ can be calculated by suitable numerical integration methods using, for example, Mathematica.
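The same computation can be sketched in R instead of Mathematica; the snippet below evaluates the two integrals above with integrate() and combines them according to the expression for E(n) given earlier in this subsection. The parameter values are illustrative, with $b = c\tau/p$ as in the text.

```r
# Sketch: numerical evaluation of E_alpha(n), E_beta(n) and E(n) for the gamma
# mixture. Parameter values are illustrative only.
g_mix <- function(x, tau, a = 2, c = 0.002, p = 0.1) {
  (1 - p) * dgamma(x, shape = a, rate = c) +
    p * dgamma(x, shape = a + c * tau / p, rate = c)
}

tau0 <- 50; tau1 <- 65; alpha <- 0.05; beta <- 0.05
logA <- log((1 - beta) / alpha); logB <- log(beta / (1 - alpha))

lr <- function(x) log(g_mix(x, tau1) / g_mix(x, tau0))
# A finite upper limit large enough to hold essentially all mass avoids 0/0 in the far tail.
E_alpha <- integrate(function(x) g_mix(x, tau0) * lr(x), 0, 2e4)$value
E_beta  <- integrate(function(x) g_mix(x, tau1) * lr(x), 0, 2e4)$value

E_n <- 0.5 * (((1 - alpha) * logB + alpha * logA) / E_alpha +
              (beta * logB + (1 - beta) * logA) / E_beta)
c(E_alpha = E_alpha, E_beta = E_beta, E_n = E_n)
```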

3.3. Simulation-Based Sample Size Determination

The purpose of the simulation study was to determine the average sample size $\bar{n}$ and the simulated probabilities of accepting and rejecting the Hypothesis $H_0$. Determining the average sample size using the Monte Carlo method was studied by Boiroju (see [11]). The purpose of an audit may be to certify the accuracy of a financial settlement, in which case the auditor focuses more on minimizing the risk β than the risk α. Therefore, $\bar{n}$ was estimated assuming that the data observed in the sample were generated under the Hypothesis $H_1$. The procedure is as follows (a condensed code sketch is given after the list):
1. For fixed values of α and β we determine $a_g = \ln\frac{1-\beta}{\alpha}$ and $b_g = \ln\frac{\beta}{1-\alpha}$. The variables LP, j, k, n are assigned the values $LP = 1000$, $j = k = n = 0$.
2. For $i = 1, 2, \ldots, LP$ we generate a dataset of size $N = 4000$ from the mixture of Poisson distributions $(1-p)\frac{e^{-a}a^x}{x!} + p\frac{e^{-a-b}(a+b)^x}{x!}$ with the parameters $a = 1000$, $p = 0.1$. The parameter b is determined from the formula $b = \tau_1/p$.
3. We draw a sample of size $n_0 = 0.01N$ and set $n = n + n_0$. We divide the population into the subsets $\mathbf{y}_s$, $\mathbf{w}_s$, $\mathbf{x}_{U \setminus s}$.
4. We estimate the mixture parameters $a_1$ and $p_1$, under the constraint $b = \tau_1/p$, based on the likelihood function. For the parameters obtained, we calculate the logarithm of the likelihood function, $l_1$.
5. We estimate the mixture parameters $a_0$ and $p_0$, under the constraint $b = \tau_0/p$, based on the likelihood function. For the parameters obtained, we calculate the logarithm of the likelihood function, $l_0$.
6. We calculate the value of the test statistic $S_i = l_1 - l_0$.
7. We repeat steps 3 to 6 as long as $b_g < S_i < a_g$ and $n \leq 0.82N$.
8. If $S_i \geq a_g$, then $k = k + 1$, which results in the rejection of the Hypothesis $H_0$ in favour of the Hypothesis $H_1$.
9. If $S_i \leq b_g$, then $j = j + 1$, which means that we reject the Hypothesis $H_1$ even though this hypothesis is true.
10. For each i we determine the number $n_i$ of sample elements needed to decide whether to accept or reject the Hypothesis $H_0$. We stop the algorithm when $n_i$ reaches $0.82N$.
11. We determine the mean sample size $\bar{n} = \frac{1}{LP}\sum_{i=1}^{LP} n_i$, the standard deviation of the simulated sample sizes $S(\bar{n}) = \sqrt{\frac{1}{LP-1}\sum_{i=1}^{LP}(n_i - \bar{n})^2}$, the simulated probability of accepting the Hypothesis $H_0$, $\hat{\beta} = \frac{j}{LP}$, the simulated probability of rejecting the Hypothesis $H_0$, $\hat{m} = 1 - \hat{\beta} = \frac{k}{LP}$, and the simulated probability $\hat{\kappa} = \hat{m} + \hat{\beta}$ of terminating the sequential procedure below the threshold $n \leq 0.82N$.
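The following R sketch condenses steps 1–7 for a single Monte Carlo replication; the helper fit_loglik() and all object names are illustrative and are not the authors' code.

```r
# Condensed sketch of steps 1-7 for one Monte Carlo replication (Poisson mixture).
set.seed(1)
alpha <- 0.05; beta <- 0.05; tau0 <- 5; tau1 <- 6
ag <- log((1 - beta) / alpha); bg <- log(beta / (1 - alpha))
N <- 4000; a <- 1000; p <- 0.1; b <- tau1 / p

pop <- ifelse(rbinom(N, 1, p) == 1, rpois(N, a + b), rpois(N, a))  # population generated under H1

# Maximized log-likelihood of the mixture under the constraint b = tau/p
# (free parameters a and p; nlm() minimizes, hence the sign changes).
fit_loglik <- function(x, tau) {
  nll <- function(par) {
    a_ <- exp(par[1]); p_ <- plogis(par[2])   # keep a > 0 and 0 < p < 1
    -sum(log((1 - p_) * dpois(x, a_) + p_ * dpois(x, a_ + tau / p_)))
  }
  -nlm(nll, p = c(log(mean(x)), qlogis(0.1)))$minimum
}

s <- integer(0); n0 <- 0.01 * N; decision <- "undecided"
repeat {
  s <- c(s, sample(setdiff(seq_len(N), s), n0))          # draw another 1% of the population
  S_i <- fit_loglik(pop[s], tau1) - fit_loglik(pop[s], tau0)
  if (S_i >= ag) { decision <- "reject H0"; break }
  if (S_i <= bg) { decision <- "accept H0"; break }
  if (length(s) >= 0.82 * N) break                       # give up before exhausting the population
}
list(decision = decision, n = length(s))
```

Wrapping this block in a loop over $i = 1, \ldots, LP$ and counting the decisions yields $\hat{\beta}$, $\hat{m}$, $\hat{\kappa}$ and $\bar{n}$ as in steps 8–11.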
For a mixture of Poisson distributions, the following hypothesis was tested:
$H_0: \tau = \tau_0 = 5, \qquad H_1: \tau = \tau_1 > \tau_0.$
Details of how the algorithm works are shown in Figure 1.
The nlm function uses a Newton-type algorithm and is included in the R stats package. Simulations were performed for three values of the mixing parameter, $p = 0.1$, $p = 0.2$ and $p = 0.3$, for the mixture of Poisson distributions. The corresponding parameter b was determined from $b = \tau_1/p$.
Tables 1–6 show the values of $E_\beta(n)$ obtained from Formulas (24)–(27) for a mixture of Poisson distributions. The values of $E_\beta(n)$ were determined in Mathematica. For a mixture of gamma distributions, the following hypothesis was tested:
$H_0: \tau = \tau_0 = 50, \qquad H_1: \tau = \tau_1 > \tau_0.$
The mixture of gamma distributions was generated with the parameters $a = 2$ and $c = 0.002$. For the mixing parameters $p = 0.1$, $p = 0.2$ and $p = 0.3$, the corresponding parameter b was determined from $b = c\tau_1/p$. The results of the simulation studies are presented in Tables 7–12.
The average sample size decreases with increasing $\tau_1$. The smallest sample sizes are obtained for data from a mixture of Poisson distributions generated with the parameter $p = 0.1$. For $\alpha = \beta = 0.2$ and $\tau_1 = 1.3\tau_0$, the mean sample size is 149 (3.7% of the population size). For this parameter, the arithmetic mean of the error-contaminated amounts, $\bar{w} = a + b$, is the largest. Similarly, for $p = 0.2$ the mean sample size is 307 (7.7% of the population size), and for $p = 0.3$ it is 502 (12.6% of the population size).
The mean sample sizes obtained are lower than the values obtained using Formulas (24)–(27). Comparing the values of E(n) and $\bar{n}$ in Tables 1–6 only makes sense if the value of $\hat{\kappa}$ is close to unity, because only then have almost all simulated runs terminated before the population size is exhausted. For a fixed value of α, changing the value of β causes negligible changes in the mean sample size. The simulated probability $\hat{\beta}$ of accepting the Hypothesis $H_0$ in Tables 1–6 is smaller than assumed. Possible reasons are that the critical values of the Wald test are only approximated by $(a_g, b_g)$, and the method of drawing, in which we draw 1% of the N population elements rather than individual elements, i.e., in our case a sequential draw of 40 population elements at a time.
Figures 1–6 show that the average sample size decreases with increasing $\tau_1$. The smallest sample sizes are obtained for data from a mixture of gamma distributions generated with the parameter $p = 0.1$. For $\alpha = \beta = 0.2$ and $\tau_1 = 1.3\tau_0$, the average sample size is 1345 (33.7% of the population size). For this parameter, the arithmetic mean of the error-contaminated amounts, $\bar{w} = \frac{a+b}{c}$, is the largest. Similarly, for $p = 0.2$ the average sample size is 1848 (46.2% of the population size), and for $p = 0.3$ it is 2280 (57% of the population size). For $p = 0.1$, $\alpha = \beta = 0.05$ and $\tau_1 = 1.3\tau_0$, the average sample size is 2419 (60.5% of the population size). Similarly, for $p = 0.2$ the average sample size is 2954 (73.9% of the population size), and for $p = 0.3$ it is 3121 (78% of the population size).

4. Conclusions

This paper presents the application of a sequential test based on the likelihood ratio function for audit studies. The main objective of the simulation studies was to determine the expected sample size.
Section 3.2 outlines formulas for numerically calculating the expected sample size when testing hypotheses about the average accounting error, using a mixture of Poisson distributions as the underlying model for the accounting values. The expected sample sizes for the mixture of Poisson distributions were obtained both analytically and through simulation. For the mixture of gamma distributions, the expected sample sizes were determined exclusively through simulation.
The values of E β ( n ) obtained in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 indicate that this sequential test is practical when the mean error size τ 1 significantly exceeds τ 0 . Otherwise, the expected sample size E β ( n ) becomes impractically large compared to the typical population size of accounting documents. The simulation studies were conducted only for selected hypothetical and alternative parameters (average audit errors), due to the time-consuming nature of the simulations. It is possible that different parameter settings could yield more favorable outcomes in terms of efficiency.

Author Contributions

Conceptualization, G.S.; methodology, G.S. and M.P.; software, M.P.; validation, G.S. and M.P.; formal analysis, G.S.; investigation, G.S.; resources, G.S.; data curation, G.S. and M.P.; writing—original draft preparation, G.S. and M.P.; writing—review and editing, G.S. and M.P.; visualization, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kaplan, R.S. A Stochastic Model For Auditing. J. Account. Res. 1973, 11, 38–46. [Google Scholar] [CrossRef]
  2. Stringer, K.W. Practical Aspects of Statistical Auditing. In Proceedings of the Business and Economic Statistics Section of the American Statistical Association, Cleveland, OH, USA, 4–7 September 1963; pp. 405–411. [Google Scholar]
  3. Wywiał, J.L. Application of Two Gamma Distributions Mixture to Financial Auditing. Sankhya B 2018, 80, 1–18. [Google Scholar] [CrossRef]
  4. Frost, P.A.; Tamura, H. Accuracy of Auxiliary Information Interval Estimation in Statistical Auditing. J. Account. Res. 1986, 24, 57–75. [Google Scholar] [CrossRef]
  5. Tamura, H. Estimation of rare errors using judgement. Biometrika 1988, 75, 1–9. [Google Scholar] [CrossRef]
  6. Fisz, M. Elements of sequential analysis. In Probability and Mathematical Statistics; PWN: Warszawa, Poland, 1967; pp. 478–503. (In Polish) [Google Scholar]
  7. Wald, A. Sequential Tests of Statistical Hypotheses. Ann. Math. Stat. 1945, 16, 117–186. [Google Scholar] [CrossRef]
  8. Byekwaso, S. Bayesian Sequential Inference for Error Rates and Error Amounts in Accounting Data. Ph.D. Thesis, The American University, Washington, DC, USA, 1994. [Google Scholar]
  9. Gölz, M.; Fauss, M.; Zoubir, A. A bootstrapped sequential probability ratio test for signal processing applications. In Proceedings of the 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Curacao, 10–13 December 2017; pp. 1–5. [Google Scholar]
  10. Marek, T.; Noworol, C. Wprowadzenie do Analizy Sekwencyjnej; Wydawnictwo Uniwersytetu Jagiellońskiego: Kraków, Poland, 1989. [Google Scholar]
  11. Boiroju, N.K. A study on bootstrap sequential probability ratio test. In Proceedings of the VI International Symposium on Optimization and Statistics, Aligarh, India, 29–31 December 2008; Aligarh Muslim University: Aligarh, India, 2013. [Google Scholar]
Figure 1. Algorithm for simulation-based determination of sample sizes and hypothesis rejection probabilities.
Figure 2. Simulated sample sizes: mixture of Poisson distributions, p = 0.1, τ_0 = 5 (based on Tables 1 and 2).
Figure 3. Simulated sample sizes for different mixing parameters: mixture of Poisson distributions, α = β = 0.05, τ_0 = 5 (based on Tables 1 and 6).
Figure 4. Simulated sample sizes with fixed α and variable β: mixture of Poisson distributions, p = 0.1, τ_0 = 5 (based on Tables 1 and 2).
Figure 5. Simulated sample sizes: mixture of gamma distributions, p = 0.1, τ_0 = 50 (based on Tables 7 and 8).
Figure 6. Simulated sample sizes for different mixing parameters: mixture of gamma distributions, α = β = 0.05, τ_0 = 50 (based on Tables 7–12).
Table 1. Sequential test results. Mixture of Poisson distributions, p = 0.1, τ_0 = 5.
τ_1 | α | β | n̄ | E_β(n) | S(n̄) | β̂ | κ̂
1.01 τ 0 0.050.052971528,5718080.0690.139
0.050.12987473,9607800.0570.141
0.050.22922380,2618250.120.186
0.10.052961473,9607950.0460.156
0.10.12890350,6088620.1040.191
0.10.22827271,8139110.1280.223
0.20.052902267,5988570.0690.185
0.20.12866228,5288680.0850.216
0.20.22738165,9079700.1220.278
1.03 τ 0 0.050.05300657,8677380.0590.139
0.050.1288718,4108850.1040.185
0.050.2286741,6308820.1430.209
0.10.05291451,8888560.0630.175
0.10.1285538,3849010.0950.211
0.10.2273229,7579750.1590.276
0.20.05286529,2968700.0750.213
0.20.1282725,0188900.0940.238
0.20.2267618,1639920.160.322
1.05 τ 0 0.050.05295620,5318080.0590.155
0.050.1292318,4108360.0970.181
0.050.2275914,7709410.1550.275
0.10.05281418,4109200.0780.23
0.10.1287213,6188550.0910.22
0.10.2271510,5589920.1440.28
0.20.05277210,3949250.0760.276
0.20.1268988769690.0940.323
0.20.22568644410200.1750.391
1.075 τ 0 0.050.05285389639050.0730.204
0.050.1281380379180.1050.233
0.050.2273764489470.1510.289
0.10.05279480379150.0650.268
0.10.1270359459790.0930.301
0.10.22539460910650.1570.381
0.20.052431453710940.0760.445
0.20.12343387511110.1170.492
0.20.22172281311290.150.568
Table 2. Sequential test results. Mixture of Poisson distributions, p = 0.1, τ_0 = 5.
τ_1 | α | β | n̄ | E_β(n) | S(n̄) | β̂ | κ̂
1.1 τ 0 0.050.05272649549610.0890.293
0.050.12618444210210.1040.354
0.050.22518356410900.1640.383
0.10.052505444210390.0720.428
0.10.12444328610660.1040.466
0.10.22241254711250.160.546
0.20.051936250811920.0760.644
0.20.11884214212050.0950.661
0.20.21663155511950.1630.736
1.15 τ 0 0.050.052146212811410.0650.591
0.050.12059190811510.1030.624
0.050.21869153111880.1230.69
0.10.051610190811790.0630.77
0.10.11516141211580.0810.786
0.10.21351109411540.1450.826
0.20.051078107710610.0630.898
0.20.199192010350.060.895
0.20.27916688660.1310.953
1.2 τ 0 0.050.051248115910880.0510.865
0.050.11237103910770.0810.873
0.050.210458349960.1020.913
0.10.058968729430.0420.939
0.10.17847688750.0590.957
0.10.26865968140.110.956
0.20.055805867620.0370.966
0.20.14925016870.0480.982
0.20.23933635470.1060.993
1.3 τ 0 0.050.053584845270.0240.996
0.050.13404345090.030.991
0.050.23243484710.050.996
0.10.052683644330.0240.997
0.10.12363213520.0270.999
0.10.22152493240.0441
0.20.051802452920.0221
0.20.11682092370.0331
0.20.21491521960.051
Table 3. Sequential test results. Mixture of Poisson distributions, p = 0.2, τ_0 = 5.
τ_1 | α | β | n̄ | E_β(n) | S(n̄) | β̂ | κ̂
1.01 τ 0 0.050.0532431,338,2502580.0110.026
0.050.132311,199,9903100.0140.031
0.050.23226962,7572900.0340.05
0.10.0532001,007,0803840.0110.05
0.10.13235887,6832790.0150.036
0.10.23191688,1863960.0330.063
0.20.053213677,5153360.0170.047
0.20.13195578,5943910.0270.06
0.20.23178420,0494060.040.077
1.03 τ 0 0.050.053230147,4703240.0170.032
0.050.13196132,2334290.0270.047
0.050.23195106,0924050.0410.063
0.10.053211110,9763690.0140.043
0.10.1320597,8183930.0220.047
0.10.2319075,8354100.0380.062
0.20.05319374,6593900.020.064
0.20.1319563,7583930.030.062
0.20.2316546,2874460.0370.089
1.05 τ 0 0.050.05322852,6523350.0110.028
0.050.1319947,2123900.0260.054
0.050.2317637,8784450.0520.069
0.10.05320039,6223760.020.056
0.10.1320734,9253940.0210.046
0.10.2316427,0764680.0570.085
0.20.05318226,6564360.0210.072
0.20.1318722,7644040.020.074
0.20.2310916,5265310.0620.127
1.075 τ 0 0.050.05316523,1614680.0160.074
0.050.1319220,7684010.0360.061
0.050.2314916,6624670.0650.099
0.10.05318217,4294230.0210.073
0.10.1318115,3634270.0290.074
0.10.2312111,9105450.070.112
0.20.05300911,7256610.0290.199
0.20.1299610,0136750.0250.203
0.20.2277972698830.0860.317
Table 4. Sequential test results. Mixture of Poisson distributions, p = 0.2, τ_0 = 5.
τ_1 | α | β | n̄ | E_β(n) | S(n̄) | β̂ | κ̂
1.1 τ 0 0.050.05317212,8954100.0240.09
0.050.1315711,5624420.0310.095
0.050.2305092776250.10.16
0.10.05303697046320.0190.181
0.10.1303785536160.0350.181
0.10.2287866317920.1180.275
0.20.05265365289780.0190.364
0.20.1264955759560.040.394
0.20.22314404811250.1290.524
1.15 τ 0 0.050.05281056158240.0280.321
0.050.1275050358680.0590.36
0.050.22448403910810.1560.472
0.10.052379422510840.030.502
0.10.12316372411150.0480.519
0.10.21951288711860.1640.67
0.20.051790284212280.0220.703
0.20.11641242712160.0560.747
0.20.21373176311600.1720.827
1.2 τ 0 0.050.052060309511360.0420.657
0.050.11958277511560.0670.68
0.050.21705222611640.1750.764
0.10.051670232912080.040.75
0.10.11570205311800.0760.794
0.10.21244159110710.1660.887
0.20.051176156711170.0440.872
0.20.11023133810390.0820.907
0.20.27839728910.1540.949
1.3 τ 0 0.050.0594113219410.0290.945
0.050.185111858770.0720.959
0.050.27049507700.1350.977
0.10.056679947980.0320.965
0.10.16328767390.0790.978
0.10.24616805840.1250.988
0.20.054506696230.0340.987
0.20.13775715220.0630.994
0.20.23074153910.1350.998
Table 5. Sequential test results. Mixture of Poisson distributions, p = 0.3, τ_0 = 5.
τ_1 | α | β | n̄ | E_β(n) | S(n̄) | β̂ | κ̂
1.01 τ 0 0.050.0532731,795,9901150.0010.004
0.050.132701,610,4401380.0030.006
0.050.232641,292,0601880.0040.008
0.10.0532661,351,5401640.0030.008
0.10.132551,191,3102230.0070.014
0.10.23256923,5752230.0050.012
0.20.053258909,2552060.0060.013
0.20.13264776,4981830.0040.008
0.20.23262563,7241820.0050.01
1.03 τ 0 0.050.053268198,7881500.0050.007
0.050.13268178,2501660.0020.006
0.050.23256143,0112240.0070.013
0.10.053262149,5951940.0030.009
0.10.13263131,8591880.0030.009
0.10.23264102,2251840.0060.009
0.20.053273100,6401210.0020.004
0.20.1326885,9461460.0010.007
0.20.2325662,3952060.0060.014
1.05 τ 0 0.050.05327371,2881210.0010.004
0.050.1326563,9221720.0050.008
0.050.2325843,8002080.010.012
0.10.05326253,6461920.0040.009
0.10.1327047,2861450.0020.006
0.10.2326736,6591520.0060.009
0.20.05325336,0902280.0050.015
0.20.1325430,8212250.0040.016
0.20.2324122,3752560.0110.029
1.075 τ 0 0.050.05327031,5301440.0030.006
0.050.1326128,2721860.0070.012
0.050.2325322,6832260.0180.019
0.10.05326723,7271750.0030.009
0.10.1326520,9141550.0060.013
0.10.2324716,2142410.0.0120.025
0.20.05320215,9623590.0020.06
0.20.1321013,6323160.0040.063
0.20.2313498964770.0310.114
Table 6. Sequential test results. Mixture of Poisson distributions, p = 0.3, τ_0 = 5.
τ_1 | α | β | n̄ | E_β(n) | S(n̄) | β̂ | κ̂
1.1 τ 0 0.050.05326717,6501500.0030.01
0.050.1325115,8262250.0080.021
0.050.2318512,6974200.0480.063
0.10.05318813,2823660.0030.071
0.10.1319811,7073670.0190.067
0.10.2277090767920.1670.41
0.20.05300889356590.0080.194
0.20.1296376307070.0080.222
0.20.2278955398450.0810.341
1.15 τ 0 0.050.05310877685000.0120.154
0.050.1301069656330.0430.202
0.050.2289555887850.1090.261
0.10.05287458457790.0140.287
0.10.1277651528540.0410.345
0.10.22364399410690.1260.533
0.20.052381393211120.0140.491
0.20.12324335811150.0420.538
0.20.21952243812060.1510.663
1.2 τ 0 0.050.05264243269440.0230.417
0.050.1257338799660.0470.443
0.050.22358311210850.1330.534
0.10.052280325611000.0190.562
0.10.12181287011490.0570.598
0.10.21890222511650.140.713
0.20.051531219011550.0270.809
0.20.11574187011880.0730.798
0.20.21231135811110.1660.874
1.3 τ 0 0.050.051635188511490.0420.804
0.050.11436169011160.0940.846
0.050.21215135610330.150.909
0.10.051208141810990.0320.881
0.10.11110125010390.0870.912
0.10.2105396911210.1580.859
0.20.057919549330.0560.951
0.20.17108158300.0980.97
0.20.25025916450.1760.988
Table 7. Sequential test results. Mixture of gamma distributions, p = 0.1, τ_0 = 50.
τ_1 | α | β | n̄ | S(n̄) | β̂ | κ̂
1.01 τ 0 0.050.0530676420.0670.139
0.050.130446550.0940.164
0.050.229786900.1340.224
0.10.0530476620.0710.151
0.10.130226720.0760.172
0.10.229727310.1210.215
0.20.0529837330.0750.191
0.20.130056730.0820.195
0.20.229277510.1260.255
1.03 τ 0 0.050.0530107130.080.165
0.050.130336780.0910.168
0.050.229747080.1350.215
0.10.0530146740.0890.179
0.10.130106960.0830.183
0.10.229517400.1360.226
0.20.0529567130.0950.232
0.20.129637330.0950.229
0.20.228957920.110.249
1.05 τ 0 0.050.0530007170.0790.176
0.050.130197010.1080.171
0.050.229357460.1550.241
0.10.0530076990.0720.185
0.10.129896930.1020.213
0.10.229197590.160.273
0.20.0529367520.0880.232
0.20.129527290.1090.238
0.20.228617900.1570.298
1.075 τ 0 0.050.0530176870.0870.165
0.050.129997060.1060.184
0.050.229687180.1430.231
0.10.0529946940.0780.2
0.10.129637560.1070.208
0.10.229407670.1310.238
0.20.0528857930.0930.272
0.20.128917470.1190.286
0.20.228328190.1560.326
Table 8. Sequential test results. Mixture of gamma distributions, p = 0.1, τ_0 = 50.
τ_1 | α | β | n̄ | S(n̄) | β̂ | κ̂
1.1 τ 0 0.050.0530216740.0970.176
0.050.130106560.1180.198
0.050.229637190.1310.234
0.10.0529797140.1040.215
0.10.129257790.1080.237
0.10.228598080.1630.288
0.20.0528917480.070.296
0.20.128887430.10.313
0.20.227257940.1770.437
1.15 τ 0 0.050.0529537570.0870.213
0.050.129867070.0980.213
0.050.229017380.1690.298
0.10.0528497990.0830.319
0.10.126968470.2270.454
0.10.227798220.1610.396
0.20.0526208780.080.48
0.20.125319330.110.504
0.20.224169230.1650.608
1.2 τ 0 0.050.0528617810.0710.329
0.050.128447780.0930.335
0.050.226928180.1860.458
0.10.0526558820.0750.46
0.10.126688440.0910.47
0.10.224419330.2040.59
0.20.0522649680.0640.644
0.20.121639800.090.712
0.20.219649560.190.794
1.3 τ 0 0.050.0524199260.0830.599
0.050.123699230.1070.632
0.050.221319260.1710.758
0.10.0520799690.0780.737
0.10.120139290.130.787
0.10.217899270.170.853
0.20.0516159960.0620.858
0.20.115219250.1050.903
0.20.213458630.1580.943
Table 9. Sequential test results. Mixture of gamma distributions, p = 0.2, τ_0 = 50.
τ_1 | α | β | n̄ | S(n̄) | β̂ | κ̂
1.01 τ 0 0.050.0531255320.0460.094
0.050.132363080.0170.027
0.050.231854430.0370.061
0.10.0532263400.0090.032
0.10.131994060.0240.055
0.10.231904080.0460.07
0.20.0531744480.0260.067
0.20.131884340.0230.061
0.20.231465080.0430.09
1.03 τ 0 0.050.0530616110.0680.127
0.050.131624860.0420.071
0.050.231704550.0490.071
0.10.0531684530.0330.079
0.10.131495090.0350.078
0.10.231185430.0690.103
0.20.0531464770.0370.095
0.20.131015720.0480.117
0.20.230736000.0610.136
1.05 τ 0 0.050.0530456540.0740.128
0.050.131784400.0350.065
0.050.231484790.0590.091
0.10.0531844160.0240.069
0.10.131484910.0340.092
0.10.231554980.0420.077
0.20.0531544830.030.086
0.20.130965760.0390.126
0.20.231065330.0730.134
1.075 τ 0 0.050.0530486320.0620.133
0.050.131714460.040.074
0.050.231045710.0810.118
0.10.0531385200.0430.09
0.10.131514910.0410.089
0.10.231095440.070.118
0.20.0531434870.0430.102
0.20.131005650.0470.117
0.20.229378780.0760.363
Table 10. Sequential test results. Mixture of gamma distributions, p = 0.2, τ_0 = 50.
τ_1 | α | β | n̄ | S(n̄) | β̂ | κ̂
1.1 τ 0 0.050.0530326620.0870.143
0.050.131505130.0440.077
0.050.231375110.0620.093
0.10.0531564870.0340.085
0.10.131225380.0480.105
0.10.230955820.0670.122
0.20.0531195270.0340.116
0.20.130666060.0480.142
0.20.230126550.0860.197
1.15 τ 0 0.050.0531544800.0320.09
0.050.131395100.050.095
0.050.230955650.0890.133
0.10.0531305150.0350.109
0.10.132641750.0470.012
0.10.230615980.070.163
0.20.0529876480.0350.237
0.20.129856550.0430.242
0.20.228857100.090.317
1.2 τ 0 0.050.0531345290.0340.103
0.050.131025520.0470.129
0.050.230635670.0940.195
0.10.0530296030.0350.195
0.10.130435920.0490.208
0.10.229866380.1150.407
0.20.0527738000.0350.379
0.20.127268090.0550.429
0.20.225788740.1310.507
1.3 τ 0 0.050.0529546390.0360.299
0.050.128816990.0640.339
0.050.227427990.1380.440
0.10.0530336110.0380.206
0.10.125958580.0690.513
0.10.223189380.1640.641
0.20.0521729910.0390.674
0.20.121069930.0740.712
0.20.218489950.1680.801
Table 11. Sequential test results. Mixture of gamma distributions, p = 0.3, τ_0 = 50.
τ_1 | α | β | n̄ | S(n̄) | β̂ | κ̂
1.01 τ 0 0.050.0532492410.0140.025
0.050.132442380.0220.036
0.050.232432440.0250.037
0.10.0532452350.0160.032
0.10.132561850.0170.032
0.10.232352420.0290.05
0.20.0532382440.0170.044
0.20.132362540.0240.048
0.20.232352380.0310.055
1.03 τ 0 0.050.0532402670.0160.032
0.050.132332450.0230.042
0.050.232043290.0490.076
0.10.0532302560.0250.048
0.10.132222770.0310.057
0.10.231933160.0570.085
0.20.0532123090.0230.068
0.20.132053280.0370.079
0.20.231873610.050.095
1.05 τ 0 0.050.0532412250.0260.043
0.050.132252690.0320.058
0.050.232083260.0480.064
0.10.0532222840.020.056
0.10.132162880.0320.068
0.10.232162750.0530.08
0.20.0532053280.030.074
0.20.132123020.0310.07
0.20.231803710.0450.097
1.075 τ 0 0.050.0532332530.0220.048
0.050.132093350.0390.06
0.050.232033300.0480.072
0.10.0532172960.0210.062
0.10.132113030.0350.067
0.10.231903490.0530.09
0.20.0532063210.0230.075
0.20.132043170.0350.08
0.20.231783550.0650.106
Table 12. Sequential test results. Mixture of gamma distributions, p = 0.3, τ_0 = 50.
τ_1 | α | β | n̄ | S(n̄) | β̂ | κ̂
1.1 τ 0 0.050.0532322630.0270.044
0.050.132202600.0420.064
0.050.231893450.0520.086
0.10.0532143230.0290.054
0.10.131993560.0320.066
0.10.231933390.0640.091
0.20.0531793890.0250.087
0.20.131823640.0040.095
0.20.231424340.0760.132
1.15 τ 0 0.050.0532222760.0330.054
0.050.132133080.0360.059
0.050.231693840.0740.102
0.10.0531973380.0360.08
0.10.131973380.040.074
0.10.231972960.0490.01
0.20.0531633760.0320.124
0.20.131623800.0430.123
0.20.231114600.0630.179
1.2 τ 0 0.050.0532142990.0290.056
0.050.132013220.0460.076
0.050.231543970.0850.119
0.10.0531903350.0260.099
0.10.131713650.0510.123
0.10.230854860.0920.19
0.20.0530145520.0370.248
0.20.130025650.050.263
0.20.229245940.0980.347
1.3 τ 0 0.050.0531214210.0360.17
0.050.130874840.0540.198
0.050.229975780.1210.271
0.10.0529545600.0310.31
0.10.129355980.0610.327
0.10.227757460.1060.415
0.20.0526128390.030.504
0.20.124508650.050.602
0.20.222809280.1340.673
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
