THE EVIDENCE www.evidencejournals.com
Theory and Methods Evidence Synthesis
Cite this Article
Kabir R, Syed HZ, Hayhoe R, Parsa AD, Sivasubramanian M, Mohammadnezhad M, Sathian B, Rahana K, Jain M, Gandhi AP, Aaqib M, Varthya SB, Singh S, Dwivedi P, Meta-Analysis Using SPSS: A Simple Guide for Clinicians, Public Health, and Allied Health Specialists. The Evi. 2024:2(1):1-. DOI:10.61505/evidence.2024.2.1.25
Available From
https://the.evidencejournals.com/index.php/j/article/view/25

Received: 2024-01-10
Accepted: 2024-02-28
Published: 2024-05-28

Evidence in Context

• Showcases SPSS v29's meta-analysis tools tailored for health professionals.
• Provides detailed instructions and examples for conducting various types of meta-analyses.
• Includes practical examples for handling continuous, binary, and correlation data.
• Outlines methods for evaluating publication bias and heterogeneity in meta-analyses.
• Recommends enhancements for future SPSS versions to improve meta-analysis functionality.


Meta-Analysis Using SPSS: A Simple Guide for Clinicians, Public Health, and Allied Health Specialists

Russell Kabir1, Haniya Zehra Syed2, Richard Hayhoe3, Ali Davod Parsa4, Madhini Sivasubramanian5, Masoud Mohammadnezhad6, Brijesh Sathian7, Kizhessery Rahana8, Manav Jain9, Aravind P Gandhi10, Muhammad Aaqib11*, Shoban Babu Varthya12, Surjit Singh13, Pradeep Dwivedi14,15

1 School of Allied Health, Anglia Ruskin University, Essex, United Kingdom.

2 School of Allied Health, Anglia Ruskin University, United Kingdom.

3 School of Allied Health, Anglia Ruskin University, United Kingdom.

4 School of Allied Health, Anglia Ruskin University, United Kingdom.

5 Department of Nursing and Public Health, University of Sunderland, London, United Kingdom.

6 School of Nursing and Healthcare Leadership, University of Bradford, United Kingdom.

7 Geriatric Medicine Department, Hamad Medical Corporation, Doha, Qatar.

8 Department of Public Health and Community Medicine, Central University of Kerala, Kasaragod, India.

9 Department of Pediatrics, University of Utah School of Medicine, Utah, United States.

10 Department of Community Medicine, All India Institute of Medical Sciences, Nagpur, India.

11 Department of Pharmacology, All India Institute of Medical Sciences, Jodhpur, India.

12 Department of Pharmacology, All India Institute of Medical Sciences, Jodhpur, India.

13 Department of Pharmacology, All India Institute of Medical Sciences, Jodhpur, India.

14 Centre of Excellence for Tribal Health, All India Institute of Medical Sciences, Jodhpur, Rajasthan, India.

15 Centre of Excellence for Tribal Health, All India Institute of Medical Sciences, Jodhpur, India.

*Correspondence: aaqibshamim987@gmail.com

Abstract

Systematic review and meta-analysis and other forms of evidence syntheses are critical to informing guideline development and healthcare decision-making. Various software packages are available nowadays to conduct meta-analysis: some are code-based, some are not freely available, and others have notable limitations. SPSS is the most commonly used statistical package, and its graphical user interface makes it user-friendly. A recent version (v29) of SPSS has introduced functionality for meta-analysis. This paper aims to provide a comprehensive and clear guide for public health, clinical, and allied health professionals to perform and report a meta-analysis using SPSS.

We have first briefly explained a few key statistical concepts relevant to meta-analysis. Then, we have provided three solved examples of meta-analysis using the attached example datasets. We have also provided the interpretation and reporting for these three cases. Next, we have discussed ancillary cases and how meta-analysts can deal with other scenarios. Finally, we have provided the developers of SPSS with some suggestions for improvements and enhancements for incorporation in future versions of this software.

Keywords: systematic review, systematic review by SPSS, meta-regression, step-by-step guide, funnel plot, Egger's regression, Harbord's test, Peters' test, bubble plot

© 2024 The author(s) and Published by the Evidence Journals. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Introduction

Systematic review and meta-analysis (SRMA) and other forms of evidence syntheses are critical to inform guideline development and healthcare decision-making [1]. Meta-analysis (MA) is a statistical analysis used for the quantitative synthesis of studies. Meta-analysis is generally performed when studies are similar enough for their results to be combined [2]. According to Haidich, 'Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess the results of previous research to derive conclusions about that body of research' [3]. In clinical practice and public health, there are two common goals of conducting MA. The first is to assess the effectiveness of specific interventions for a particular problem using a relatively small number of studies (fewer than 25). The second is to provide generalizations from a larger number of original studies (more than 100) that are not attainable from a single study [4]. MA combines quantitative data from (but is not limited to) randomized controlled trials, observational studies, diagnostic test accuracy studies and prevalence studies [5]. Single-study findings are often insufficient to provide clear evidence, but the combination of multiple research results allows researchers to draw more robust conclusions after a systematic and rigorous investigation of the available evidence [5,6]. Moreover, when results differ across studies (as is often seen in real-world situations), a robust MA allows us to explore the reasons behind this variation. It helps us better understand the trends and determinants of the observed effect.

The basic unit measurement of MA is the effect size (ES). It is a metric to capture the direction and magnitude of the effect [7]. Generally, a simple study will provide a single ES whereas complex studies provide several ESs. ES is the dependent variable in the MA and other important attributes impacting the effect size are independent variables in the MA.

Systematic review and MA are commonly used terms. A systematic review can be conducted with or without MA depending on the availability of combinable quantitative data. Hansen et al [8] identified the following stages in conducting a MA: (i) formulating a research question, (ii) selecting appropriate studies through a literature search, (iii) deciding the ES of the selected studies, (iv) selecting the analysis technique, (v) choosing the software, (vi) coding the ESs, (vii) analysis, and (viii) reporting the findings [8]. These steps have been explained in further detail previously [9]. Here, we discuss the steps required to analyze the extracted data, generate plots, present the results, and report the findings.

Various software packages are available nowadays to conduct MA, mainly RevMan, STATA, R, JAMOVI, JASP, Open Meta, Meta XL, DSTAT, Comprehensive Meta-Analysis (CMA), SAS, Python, and IBM SPSS. Though RevMan is quite popular, it has shortcomings including, but not limited to, the inability to perform multiple subgroup analyses simultaneously, the absence of prediction intervals [10], and the lack of easy reproducibility of exported results after minor changes in the data. We have earlier discussed how to perform meta-analysis in R [11]. However, R is code-based software, and not everyone is comfortable with the interface. Hence, we are discussing how to perform a meta-analysis in the most popular statistical software with a graphical user interface, i.e., SPSS.

This paper aims to provide a comprehensive and clear guide to public health, clinicians, and allied health professionals to conduct MA using the statistical software IBM SPSS Statistics V29. We have first discussed some key statistical concepts. Then, we have solved three examples of meta-analysis in SPSS in increasing order of complexity, to aid better comprehension. Then, we have provided ways to perform other meta-analyses and modify these analyses. Lastly, we have provided scopes for improvement in the current functionality for meta-analysis provided by SPSS.


Key Statistical Concepts

Different types of Meta-Analysis

Table 1: Summary of different types of MA

 

Single Group MA

§ Proportional MA - this method focuses on summarizing the overall proportion or prevalence of a particular outcome in a single group across different studies [12].

§ Means MA - a mean difference, or summary estimate, can be produced in a meta-analysis when the mean difference values for a given outcome in a single group, derived from various RCTs, are all in the same unit [13].

§ Correlation coefficient MA - a statistical technique used to combine data from multiple studies to estimate the strength and direction of the relationship between two continuous variables.

§ Incidence rate MA - this technique combines the incidence rates from several studies [14].

Two Groups MA

§ For continuous data - standardized mean difference (SMD) using Hedges' g, Cohen's d, or Glass's Delta. SMD is used when multiple studies assess the same outcome but measure it in different ways. For example, many studies measure emotional intelligence among college students but use different scales [13].

§ For binary data (such as yes or no) - a method that combines binary data in two groups from several research studies is known as binary data MA. Effects are expressed as the odds ratio (OR), relative risk or risk ratio (RR), or hazard ratio (HR).

Other types of MA

§ Multilevel MA - this technique determines the overall effect when the effect sizes have a hierarchical structure; for example, assessing the impact of a new teaching technique on statistics test scores across different universities in a town [15].

§ Network MA - in this technique, three or more interventions are compared concurrently in a single analysis by combining both direct and indirect evidence across a network of studies [16].

Heterogeneity Measures

Evaluating heterogeneity in meta-analysis is a pivotal consideration, as the extent (or absence) of heterogeneity (variability between studies) can impact the choice of the statistical model applied [17]. The following techniques are used to assess the heterogeneity -
Tau squared:
Tau squared (τ²) measures the extent of heterogeneity among the effect sizes of individual studies included in the meta-analysis. It is often calculated as part of the random-effects meta-analysis model. The random-effects model assumes that the true effect size can vary across studies, and tau squared represents the estimated between-study variance in effect sizes [18].
H Squared:
H² is the ratio of the variance of the estimated overall effect size from a random-effects meta-analysis compared to the variance from a fixed-effects meta-analysis. A higher value of H² indicates greater heterogeneity, while a lower value indicates less heterogeneity [19].
I squared:
The I² statistic is a measure of heterogeneity that quantifies the proportion of variance in effect sizes across studies that is attributed to true differences in effects rather than sampling error. It is a percentage-based measure, ranging from 0% (least heterogeneity) to 100% (highest heterogeneity) [20]. However, the commonly used interpretation of I² using thresholds to define the extent of heterogeneity is flawed [21]. I² is a relative measure: it merely says what percentage of the observed variance is not attributable to sampling error, and it tells us nothing about the absolute magnitude of heterogeneity.
Prediction Interval:
The 95% prediction interval (95% PI) is a more practical estimate of between-study heterogeneity in meta-analysis [22]. The 95% PI gives the range in which we expect the effect size from 95% of similar studies to fall. It should not be confused with the 95% confidence interval (95% CI). While the 95% CI of the pooled effect size is concerned with the precision of our estimate and is related to the point estimate, it does not comment on the variance of the effect size. The 95% PI gives information on this variance instead. Though it is not very commonly reported [23-25], it should nevertheless be used as a more preferred heterogeneity marker [21].
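To make the relationships between these measures concrete, the following base-R sketch computes tau², H², I², and the 95% prediction interval from a vector of study effect sizes (yi) and their standard errors (sei). This is an illustration only, assuming the widely used DerSimonian-Laird estimator; SPSS may use different estimators or adjustments.

# Minimal base-R sketch of DerSimonian-Laird heterogeneity statistics (illustrative)
dl_heterogeneity <- function(yi, sei) {
  k  <- length(yi)
  wi <- 1 / sei^2                           # inverse-variance (fixed-effect) weights
  mu_fe <- sum(wi * yi) / sum(wi)           # fixed-effect pooled estimate
  Q  <- sum(wi * (yi - mu_fe)^2)            # Cochran's Q
  df <- k - 1
  C  <- sum(wi) - sum(wi^2) / sum(wi)
  tau2 <- max(0, (Q - df) / C)              # between-study variance
  H2   <- Q / df
  I2   <- max(0, (Q - df) / Q)              # share of variance beyond sampling error
  wr   <- 1 / (sei^2 + tau2)                # random-effects weights
  mu_re <- sum(wr * yi) / sum(wr)           # random-effects pooled estimate
  se_mu <- sqrt(1 / sum(wr))
  t_crit <- qt(0.975, df = k - 2)
  pi <- mu_re + c(-1, 1) * t_crit * sqrt(tau2 + se_mu^2)   # 95% prediction interval
  list(tau2 = tau2, H2 = H2, I2 = I2, pooled = mu_re,
       ci = mu_re + c(-1, 1) * qnorm(0.975) * se_mu, pi = pi)
}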


Models in meta-analysis
Two types of models are used in MA – (i) Fixed and (ii) Random effects.
1. Fixed Effect Model: In the fixed-effect model, it is assumed that the true effect size is identical across all studies.
2. Random Effect Model: In this model, it is assumed that the true effect sizes of interventions or treatments vary across studies due to real differences between study populations, measurement methods, or other factors. Random-effects models are more appropriate when there is evidence of heterogeneity in a meta-analysis.


Standardized Mean Difference
When the same outcome is measured using different methods (like different scales or questionnaires for assessing depression severity), we cannot combine them or average them as it is. However, the aim of measurement is the same. Hence, standardized mean difference (SMD) is used here.
Cohen's d: It is used to quantify the difference between two groups or conditions in terms of standard deviations. Cohen's d can be interpreted as the number of standard deviations by which the treatment group's mean outcome differs from the control group's [26]. Cohen's d is typically interpreted as follows:
• d = 0.2 indicates a small effect size
• d = 0.5 indicates a medium effect size
• d = 0.8 indicates a large effect size

Hedges’ g: Hedges' g is a standardized measure of effect size that is commonly used for continuous data. It is a modification of Cohen's d, which is another commonly used standardized effect size measure. The main difference between Hedges' g and Cohen's d is that Hedges' g adjusts for the sample size of each study [26].

Hedges' g can be interpreted as the number of standard deviations that the mean outcome for the treatment group is different from the mean outcome for the control group. A Hedges' g of 0.2 indicates a small effect size, a Hedges' g of 0.5 indicates a moderate effect size, and a Hedges' g of 0.8 indicates a large effect size.
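As a small worked illustration of these formulas (not SPSS output), the sketch below computes Cohen's d from the pooled standard deviation and obtains Hedges' g by applying the usual small-sample correction factor to d; the function name smd is ours.

# Cohen's d and Hedges' g from two-group summary statistics (illustrative)
smd <- function(m1, sd1, n1, m2, sd2, n2) {
  s_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d <- (m1 - m2) / s_pooled               # Cohen's d
  J <- 1 - 3 / (4 * (n1 + n2) - 9)        # small-sample correction factor
  c(cohens_d = d, hedges_g = J * d)       # Hedges' g = J * d
}
smd(120, 9, 110, 134, 10, 112)            # e.g., Study A from Table 2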


Publication Bias


Publication bias occurs when the publication of research studies heavily relies on the nature and direction of the results [27]. A study is more likely to be published if the results are significant, and studies with non-significant results are less likely to be published. This discrepancy is especially pronounced for smaller studies. Larger studies might be published irrespective of the results, but the publication of smaller studies may depend on their results. Hence, the MA may fail to include all the relevant studies. Generally, a funnel plot is used to assess the publication bias in MA.


Example 1: Continuous outcome

We will discuss here how to meta-analyse continuous data from two groups. In this example, we look at randomized controlled trials comparing fasting blood sugar (FBS) in participants with diabetes in the experimental arm with FBS in participants in the control arm. We suspect that the effect on FBS can vary depending on two variables: the presence of additional comorbidities and the age of the participants.

Thus, we need to extract these data: study label (Study), mean value of FBS in the experimental group (mean.e), standard deviation of FBS in the experimental group (sd.e), number of participants in the experimental group (n.e), mean value of FBS in the control group (mean.c), standard deviation of FBS in the control group (sd.c), number of participants in the control group (n.c), coded subgroup data (studies with and without diabetics with additional comorbidities) (subgroup), and the average age of the study participants (age) [Table 2].

Table 2: Continuous outcome data

Study      mean.e   sd.e   n.e   mean.c   sd.c   n.c   subgroup   age
Study A    120      9      110   134      10     112   1          51
Study B    130      12     240   134      15     241   2          58
Study C    118      5      78    130      8      80    1          49
Study D    128      15     188   134      20     191   2          48
Study E    120      6      202   136      7      190   1          35
Study F    114      8      286   126      4      280   1          46
Study G    124      14     174   121      11     172   2          55
Study H    110      7      186   130      8      190   1          36
Study I    122      4      160   143      9      158   1          39
Study J    122      13     182   123      15     180   2          60
Study K    120      13     160   123      13     156   2          43
Study L    130      8      230   136      7      228   1          62

Step 1: Import data into SPSS using the following command

In the menu bar, click File > Import Data > Excel [Figure 1]. Now, you can choose the datasheet with the extracted data. Alternatively, you can use this table [Table 2] by remaking it.

Figure 1: Importing Excel data into SPSS

Step 2: Conducting Meta-analysis

• Once the datasheet is open, check the menu bar, and click Analyze > Meta-Analysis > Continuous outcomes > Raw data [Figure 2]
• Move variable ‘mean.e’ to Mean, ‘sd.e’ to Standard Deviation and ‘n.e’ to Study under Treatment Group [Figure 3]
• Move variable ‘mean.c’ to Mean, ‘sd.c’ to Standard Deviation and ‘n.c’ to

Study under Control Group
• Move ‘Study’ to Study Label
• Select ‘Unstandardized Mean Difference’ under Effect Size
• Select ‘Random-effects’ under Model
• Click Bias > Select ‘Egger’s regression-based test’
• Click Trim-and-Fill > Select ‘Estimate number of missing studies’
• Click Print > select ‘Prediction interval under random-effects model’ under Effect Sizes
• Click Save > select ‘Individual effect size’ and ‘Standard error’
• Click Plot > click ‘Forest Plot’ tab > select ‘Forest Plot’ > under ‘Display Columns’, select ‘Effect Size’, ‘Confidence interval limits’, and ‘Weight’ > move the variables ‘mean.e’, ‘sd.e’, ‘n.e’, ‘mean.c’, ‘sd.c’, and ‘n.c’ under ‘Additional Column(s)’ > under 'Reference Lines’, select ‘Overall effect size’ and ‘Null effect size’ > under Annotations, select ‘Homogeneity’ and ‘Heterogeneity’ [Figure 4]
• Within Plot > click ‘Funnel plot’ tab > select ‘Funnel plot’ > under ‘Y-axis Values’, choose ‘Inverse standard error’ > select ‘Include imputed studies with trim-and-fill’ > move ‘Study’ to Label [Figure 5]
• Once all the settings have been chosen properly, press ‘OK’

Figure 2: Selecting Meta-analysis

Figure 3: Entering variables and selecting options

Figure 4: Customising the forest plot

Figure 5: Customising the funnel plot
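For readers who also use R, the same analysis can be cross-checked with the meta package [31]. This is a hedged sketch, not SPSS's internal routine: it assumes the Table 2 data have been read into a data frame called dat and that a recent version of meta is installed (argument names differ slightly across versions).

# Cross-check of Example 1 in R with the 'meta' package (assumptions noted above)
library(meta)
m1 <- metacont(n.e = n.e, mean.e = mean.e, sd.e = sd.e,
               n.c = n.c, mean.c = mean.c, sd.c = sd.c,
               studlab = Study, data = dat,
               sm = "MD", random = TRUE, prediction = TRUE)
summary(m1)                           # pooled MD, 95% CI, 95% PI, tau2, I2
forest(m1)                            # forest plot
funnel(m1)                            # funnel plot
metabias(m1, method.bias = "Egger")   # Egger's regression-based test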

Interpreting and reporting the results

After pressing 'OK', a lot of output appears in a new SPSS window. To simplify the extensive output, we will look at a few of the key figures and tables. We will first interpret the required figures and refer to the relevant tables as needed. The main output is a forest plot [Figure 6]. Each row in the forest plot corresponds to a single study and shows the effect size from that study on the right. The centre of the blue square represents the study's effect estimate, whereas the ends of the black line indicate its confidence limits. The size of the blue square represents the weight of each study in the meta-analysis (in other words, it indicates the importance or representation of each study in the calculation of the overall pooled or average effect). The black/grey vertical line represents the line of no effect (zero for mean difference, and one for risk ratio or odds ratio). The red dotted vertical line represents the pooled effect.

The last row corresponds to the pooled (or average) estimate. Here, the effect size is represented by a dark red diamond (instead of the blue square for individual studies). The centre of the diamond represents the average pooled effect size, whereas the ends of the black line represent the upper and lower 95% confidence limits of the pooled effect size.

Here, the forest plot shows that the mean difference in fasting blood sugar (mg/dl) between the experimental arm and the control arm is -9.40 [95% CI: -13.73 to -5.07] [Figure 6]. The between-study heterogeneity is indicated at the bottom of the forest plot. It reports a tau² of 57.23, an H² of 69.23, an I² of 0.99 (or 99%), and a statistically significant Q-statistic [Q = 620.48, p < 0.01]. However, these metrics of heterogeneity are not easily interpretable and do not paint the complete picture, as discussed earlier. Hence, we use the 95% prediction interval (95% PI). Table 3 assesses heterogeneity by providing a 95% PI of -26.96 to 8.16; in 95% of comparable settings, we expect the effect size to fall within this range. We can explore, or attempt to explain, this heterogeneity using meta-regression, which is discussed in the next subsection.

Figure 6: Forest plot for continuous outcomes

Table 3: Effect size estimate for continuous outcome data

Effect Size Estimates

           Effect Size   Std. Error   Z        Sig. (2-tailed)   95% CI Lower   95% CI Upper   95% PI Lower(a)   95% PI Upper(a)
Overall    -9.401        2.2091       -4.255   <.001             -13.730        -5.071         -26.960           8.159

a. Based on t-distribution.

Next, we inspect the funnel plot to assess publication bias [Figure 7]. This funnel plot may initially look like a typical example of publication bias. Smaller studies (i.e., studies with higher standard error or lower inverse standard error) are asymmetrically arranged and are clustered mainly on one side of the central black vertical line (the pooled effect size). The funnel plot asymmetry is further confirmed by Egger’s regression [Table 4]. A p-value < 0.1 in Egger’s regression is considered consistent with funnel plot asymmetry. However, there is a finer point to be noted. All the five studies lying in the lower right quadrant of the funnel plot belong to the second subgroup while the other seven studies belong to the first subgroup. Hence, the funnel plot asymmetry may be explained by between-study heterogeneity leading to clustering of study estimates. We cannot attribute this funnel plot asymmetry to publication bias.

Figure 7: Funnel plot for continuous outcomes

Table 4: Egger’s regression for continuous outcome data

Egger's Regression-Based Test(a)

Parameter     Coefficient   Std. Error   t        Sig. (2-tailed)   95% CI Lower   95% CI Upper
(Intercept)   -22.435       5.4051       -4.151   .002              -34.478        -10.392
SE(b)         11.966        4.6898       2.552    .029              1.517          22.416

a. Random-effects meta-regression
b. Standard error of effect size

Step 3: Conducting Meta-regression

• In the menu bar, click Analyze > Meta-Analysis > Meta-Regression
• Move ‘Predicted Value of Effect Size [ES]’ to Effect Size [Figure 8]
• Move ‘Estimated Standard Error of Predicted Value of Effect Size [seES]’ to Standard Error
• Move ‘subgroup’ to Factor(s)
• Move ‘age’ to Covariate(s)
• Select ‘Random-effects’ under Model
• Click Print > select ‘Display exponentiated statistics’
• Click Plot > select ‘age’ > move ‘Study’ to Label
• Once all the settings have been chosen properly, press ‘OK’

Figure 8: Setting up meta-regression
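The corresponding meta-regression can also be cross-checked in R. This is again a hedged sketch building on the m1 object created above; subgroup and age are assumed to be columns of the same data frame.

# Meta-regression with subgroup (categorical) and age (continuous) as moderators
mr1 <- metareg(m1, ~ factor(subgroup) + age)
mr1                                   # coefficients with 95% CIs and residual tau2
bubble(metareg(m1, ~ age))            # bubble plot for the single continuous moderator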

Interpreting and reporting the results

Meta-regression helps us explore heterogeneity. We can assess whether other variables also contribute to the effect size. We initially suspected that additional comorbidities (a categorical variable) and age (a continuous variable) may influence fasting blood sugar. Hence, we tested them with meta-regression. We see here that the first subgroup (compared to the second subgroup) significantly influences the estimate (-9.24, 95% CI: -13.87 to -4.61, p < 0.01) [Table 5]. Similarly, age also significantly moderates the estimate (0.40, 95% CI: 0.15 to 0.66, p < 0.01) [Table 5].


Table 5: Meta-regression for continuous outcome data

Parameter Estimates

Parameter      Estimate   Std. Error   t        Sig. (2-tailed)   95% CI Lower   95% CI Upper
(Intercept)    -23.408    6.0930       -3.842   .004              -37.192        -9.625
subgroup = 1   -9.241     2.0465       -4.516   .001              -13.871        -4.612
subgroup = 2   0(a)       .            .        .                 .              .
Age            .402       .1117        3.600    .006              .149           .655

a. This parameter is set to zero because it is redundant.

The moderating effect of continuous variables (like age) can also be depicted graphically using a bubble plot [Figure 9]. The Y-axis shows the effect size (dependent variable) and the X-axis shows age (independent variable). Each circle represents the effect size and the average age of participants in a particular study, and the size of the circle represents the weight of that study. The regression slope is positive, so the effect size increases (becomes less negative) with age. Since most effect sizes and the pooled estimate are negative, this means that the difference in FBS between the groups becomes smaller with increasing age.

Figure 9: Bubble plot for continuous outcomes

Example 2: Binary Outcome

In this example, we will show how to conduct MA of binary outcomes – risk ratio (RR) using the sample data. Continuing from the previous example, we look at randomized controlled trials comparing number of participants experiencing adverse events in the experimental arm with the number of participants experiencing adverse events in the control arm. This is a binary outcome as patients either experience an adverse event (i.e., the event occurred) or do not experience an adverse event (i.e., the event did not occur). We suspect that this can also vary depending on two variables as discussed earlier - the presence of additional comorbidities, and the average age of participants.

We extract similar data as earlier except for the outcome measurement. Instead of mean, standard deviation, and number of participants in each group (as in continuous outcome), we extract the number of participants with the event of interest (adverse event here) and the number of participants without the event. Thus, we extract these data: study label (Study), number of participants with the event in the experimental group (event.e), number of participants without the event in the experimental group (no_event.e), number of participants with the event in the control group (event.c), number of participants without the event in the control group (no_event.c), coded subgroup data (studies with and without diabetics with additional comorbidities) (subgroup), and the average age of the study participants (age) [Table 6].


Table 6: Binary outcome data

Study      event.e   no_event.e   event.c   no_event.c   subgroup   age
Study A    35        75           14        98           1          51
Study B    25        215          12        229          2          58
Study C    23        55           11        69           1          49
Study D    18        170          25        166          2          48
Study E    16        186          20        170          1          35
Study F    60        226          21        259          1          46
Study G    19        155          29        143          2          55
Study H    11        175          13        177          1          36
Study I    10        150          12        146          1          39
Study J    15        167          10        170          2          60
Study K    14        146          25        131          2          43
Study L    53        177          17        211          1          62

Step 1: Import data into SPSS using the following command

One can import the data as shown earlier or copy this table [Table 6] directly.

Step 2: Conducting Meta-analysis

• Once the datasheet is open, check the menu bar, and click Analyze > Meta Analysis > Binary outcomes > Raw data
• Move variable ‘event.e’ to Success, and ‘no_event.e’ to Failure under Treatment Group
• Move variable ‘event.c’ to Success, and ‘no_event.c’ to Failure under Control Group
• Move ‘Study’ to Study Label
• Select ‘Log Risk Ratio’ under Effect Size
• Select ‘Random-effects’ under Model
• Click Bias > Select Harbord’s test
• Click Trim-and-Fill > Select ‘Estimate number of missing studies’
• Click Print > select ‘Prediction interval under random-effects model’ and ‘Display exponentiated statistics’ under Effect Sizes
• Click Save > select ‘Individual effect size’ and ‘Standard error’
• Click Plot > click ‘Forest Plot’ tab > select ‘Forest Plot’ > under ‘Display Columns’, select ‘Effect Size’, ‘Confidence interval limits’, ‘Weight’, and ‘Display exponentiated form’ > move the variables ‘event.e’, ‘no_event.e’, ‘event.c’, and ‘no_event.c’ under ‘Additional Column(s)’ > under 'Reference Lines’, select ‘Overall effect size’ and ‘Null effect size’ > under Annotations, select ‘Homogeneity’ and ‘Heterogeneity’
• Within Plot > click ‘Funnel plot’ tab > select ‘Funnel plot’ > under ‘Y-axis Values’, choose ‘Inverse standard error’ > select ‘Include imputed studies with trim-and-fill’ > move ‘Study’ to Label
• Once all the settings have been chosen properly, press ‘OK’.
Ensure that you have opted for the exponentiated form/statistics (not required for continuous data).
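As a hedged R cross-check of this binary-outcome analysis (assuming the Table 6 data are in a data frame called dat2 and a recent version of the meta package), note that meta::metabin() expects the number of events and the total per arm, so the failures are added back to the events here:

# Cross-check of Example 2 in R with the 'meta' package (assumptions noted above)
library(meta)
m2 <- metabin(event.e = event.e, n.e = event.e + no_event.e,
              event.c = event.c, n.c = event.c + no_event.c,
              studlab = Study, data = dat2,
              sm = "RR", random = TRUE, prediction = TRUE)
summary(m2)                             # pooled RR, 95% CI, 95% PI
metabias(m2, method.bias = "Harbord")   # Harbord's test for binary outcomes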

Interpreting and reporting the results

The forest plot shows that the relative risk (or risk ratio) of participants in the experimental arm experiencing an adverse event compared to those in the control arm is 1.31 [95% CI: 0.90 to 1.90] [Figure 10]. Though the estimate shows a higher risk, it is not statistically significant as the confidence limits cross the threshold of no effect (one here).


Figure 10: Forest plot for binary outcomes

To assess heterogeneity, we check the 95% PI in Tables 7-8. The two tables remind us of an important statistical consideration. In mean difference (MD), calculation of the pooled MD (or average estimate) involves the MD provided from individual studies. However, for binary outcomes like risk ratio (RR) and odds ratio (OR), the effect size first undergoes logarithmic transformation. Then, the averaging takes place (based on weight). After that, the average or pooled estimate is back-transformed (inverse logarithm or exponentiation) and represented as risk ratio. Thus, even though the forest plot shows only risk ratios and not logarithm-transformed values, a transformation and back-transformation take place in the calculations behind the scenes. For easier clinical interpretation, we will consider the 95% PI given in the exponentiated Table 8, and not Table 7 where back-transformation has not been carried out yet.

Table 7: (Untransformed) Effect size estimate for binary outcome data

Effect Size Estimates

           Effect Size   Std. Error   Z       Sig. (2-tailed)   95% CI Lower   95% CI Upper   95% PI Lower(a)   95% PI Upper(a)
Overall    .266          .1920        1.387   .165              -.110          .643           -1.099            1.632

a. Based on t-distribution.

Table 8: (Exponentiated) Effect size estimate for binary outcome data

Effect Size Estimates

           Exp. Effect Size   Exp. 95% CI Lower   Exp. 95% CI Upper   Exp. 95% PI Lower   Exp. 95% PI Upper
Overall    1.305              .896                1.902               .333                5.112

The funnel plot looks symmetrical with studies distributed similarly on both sides of the pooled estimate (the central black vertical line) [Figure 11]. This is confirmed by a non-significant Harbord’s regression [Table 9]. So, we do not suspect publication bias in this case.

Figure 11: Funnel plot for binary outcomes

Table 9: Harbord’s regression for binary outcome data

Harbord's Regression-Based Test(a)

Parameter     Coefficient   Std. Error   t        Sig. (2-tailed)   95% CI Lower   95% CI Upper
(Intercept)   1.623         .8181        1.984    .075              -.200          3.446
INV_SE(b)     -4.586        2.6866       -1.707   .119              -10.572        1.400

a. Random-effects meta-regression
b. Inverse score standard error

Step 3: Conducting Meta-regression
• In the menu bar, click Analyze > Meta-Analysis > Meta-Regression
• Move ‘Predicted Value of Effect Size [ES]’ to Effect Size
• Move ‘Estimated Standard Error of Predicted Value of Effect Size [seES]’ to Standard Error
• The remaining steps mirror Example 1: move 'subgroup' to Factor(s), move 'age' to Covariate(s), select 'Random-effects' under Model, click Plot and select 'age', and press 'OK'.

Interpreting and reporting the results

We see that the first subgroup (compared to the second subgroup) significantly influences the estimate (0.99, 95% CI: 0.46 to 1.53, p < 0.01) [Table 10]. Similarly, age also significantly moderates the estimate (0.06, 95% CI: 0.03 to 0.09, p < 0.01) [Table 10]. However, we need to remember that these are transformed effect sizes (on the logarithmic scale) and should not be interpreted as raw RRs. The bubble plot shows that the relative risk increases with increasing age [Figure 12].


Figure 12: Bubble plot for binary outcomes

Table 10: Meta-regression for binary outcome data

Parameter Estimates

Parameter      Estimate   Std. Error   t        Sig. (2-tailed)   95% CI Lower   95% CI Upper
(Intercept)    -3.072     .7438        -4.130   .003              -4.755         -1.389
subgroup = 1   .994       .2368        4.198    .002              .458           1.530
subgroup = 2   0(a)       .            .        .                 .              .
age            .057       .0138        4.100    .003              .025           .088

a. This parameter is set to zero because it is redundant.

Example 3: Correlation

In this example, we will show how to conduct MA of correlation. Continuing from the previous examples, we look at studies reporting correlation between fasting blood sugar and glycosylated haemoglobin (HbA1c). Again, we suspect that this can vary depending on two variables as discussed earlier - the presence of additional comorbidities, and the average age of participants.

The data extraction for outcome measurement slightly differs here. We only have one group (not two groups), and need just the correlation coefficient and the number of participants in each study. Though this data is sufficient for the statistical analysis, SPSS does not natively support MA of correlations. Hence, we have to find a workaround to achieve this (unlike the meta::metacor() function in R).

The correlation coefficient (r) and the number of participants (n) are used to derive a transformed effect size (z) and the standard error of this transformed effect size (SE_z). This transformed effect size is Fisher's z. It is said to stabilise the variance of correlation coefficients and to make them approximately normally distributed, which is essential for statistical analyses assuming normality [28]. For convenience, we have already pre-populated the formulas for z and SE_z in the data file attached at github..... Thus, if one inputs r and n, then z and SE_z are calculated automatically. In case this file is not accessible, the following formulas can be used to compute z and SE_z respectively.

• z: =FISHER(B2)
• SE_z: =1/SQRT(C2-3)
• Here, the B2 cell refers to r, and the C2 cell refers to n. This can be edited as required.
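The same two quantities can also be computed in R instead of Excel; this is a small illustrative sketch using Study A from Table 11:

r <- 0.92; n <- 222
z    <- atanh(r)          # Fisher's z, equivalent to =FISHER(r)
se_z <- 1 / sqrt(n - 3)   # standard error of z
c(z = z, se_z = se_z)     # approximately 1.589 and 0.0676, matching Table 11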

We load this revised file to SPSS. Thus, the uploaded sheet has these data: study label (Study), correlation coefficient (r), number of participants (n), coded subgroup data (studies with and without diabetics with additional comorbidities) (subgroup), average age of the study participants (age), transformed effect size (z), and standard error for this transformed effect size (SE_z) [Table 11].

Table 11: Correlation Data

Study      cor    n     subgroup   age   z            SE_z
Study A    0.92   222   1          51    1.58902692   0.06757374
Study B    0.83   481   2          58    1.1881364    0.04573894
Study C    0.91   158   1          49    1.52752443   0.08032193
Study D    0.93   379   2          48    1.65839002   0.05157106
Study E    0.84   392   1          35    1.22117352   0.05070201
Study F    0.91   566   1          46    1.52752443   0.04214498
Study G    0.96   346   2          55    1.94591015   0.05399492
Study H    0.91   376   1          36    1.52752443   0.05177804
Study I    0.9    318   1          39    1.47221949   0.05634362
Study J    0.85   362   2          60    1.25615281   0.05277798
Study K    0.83   316   2          43    1.1881364    0.05652334
Study L    0.88   458   1          62    1.37576766   0.04688072

Step 1: Import data into SPSS using the following command

One can import the data as shown earlier or copy this table [Table 11] directly.

Step 2: Conducting Meta-analysis

• Once the datasheet is open, check the menu bar, and click Analyze > Meta Analysis > Continuous outcomes > Pre-Calculated Effect Size
• Move variable ‘z’ to Effect Size, ‘SE_z’ to Standard Error, and ‘Study’ to Study Label
• Select ‘Random-effects’ under Model
• Click Bias > Select ‘Egger’s regression-based test’
• Click Trim-and-Fill > Select ‘Estimate number of missing studies’
• Click Print > select ‘Prediction interval under random-effects model’ under Effect Sizes
• Click Plot > click ‘Forest Plot’ tab > select ‘Forest Plot’ > under ‘Display Columns’, select ‘Effect Size’, ‘Confidence interval limits’, and ‘Weight’ > move the variables ‘r’, and ‘n’ under ‘Additional Column(s)’ > under 'Reference Lines’, select ‘Overall effect size’ and ‘Null effect size’ > under Annotations, select ‘Homogeneity’ and ‘Heterogeneity’
• Within Plot > click ‘Funnel plot’ tab > select ‘Funnel plot’ > under ‘Y-axis Values’, choose ‘Inverse standard error’ > select ‘Include imputed studies with trim-and-fill’ > move ‘Study’ to Label
• Once all the settings have been chosen properly, press ‘OK’.

Interpreting and reporting the results

This is not as straightforward and brings new challenges. The results are based on Fisher's z transformation. We need to back-transform them to correlation coefficients to interpret them better. The forest plot shows a transformed effect size of 1.46 (95% CI: 1.33 to 1.58; 95% PI: 0.94 to 1.97) for the correlation between FBS and HbA1c in the given participants [Figure 13, Table 12]. You may notice that the value goes beyond the range of -1 to +1, which does not happen with raw correlations. Hence, it is difficult to interpret and apply clinically.

We have added the formula in the results tab of the data file at github…. Alternatively, you can use this formula in MS Excel to calculate the back-transformed correlation coefficients.
• =FISHERINV(B3)

Here, B3 can be the transformed coefficient, confidence limit, or prediction limit and the result of the above formula gives the back transformed equivalent of that number. In this case, it shows that the correlation between FBS and HbA1c in the given participants is 0.90 (95% CI: 0.87 to 0.92, 95% PI: 0.74 to 0.96). This shows a strong positive correlation. It can be easily interpreted and thereby applied in real-life situations, be it in the clinics or in a public health scenario.
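Alternatively, for readers using R, meta::metacor() (mentioned above) performs the Fisher's z transformation and the back-transformation automatically. This is a hedged sketch assuming the Table 11 data are in a data frame called dat3 and a recent version of the meta package:

library(meta)
m3 <- metacor(cor = cor, n = n, studlab = Study, data = dat3,
              sm = "ZCOR", random = TRUE, prediction = TRUE)
summary(m3)   # pooled correlation with 95% CI and 95% PI reported on the r scale
tanh(c(1.455, 1.327, 1.584, 0.945, 1.966))   # manual back-transformation of Table 12, ~0.90 (0.87 to 0.92; PI 0.74 to 0.96)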

Figure 13: Forest plot for correlation

Table 12: Effect size estimate for correlation

Effect Size Estimates

           Effect Size   Std. Error   Z        Sig. (2-tailed)   95% CI Lower   95% CI Upper   95% PI Lower(a)   95% PI Upper(a)
Overall    1.455         .0654        22.262   <.001             1.327          1.584          .945              1.966

a. Based on t-distribution.

The funnel plot looks symmetrical with studies distributed more-or-less similarly on both sides of the pooled estimate (the central black vertical line) [Figure 14]. This is confirmed by a non-significant Egger’s regression [Table 13]. So, we do not suspect publication bias in this case.


Figure 14: Funnel plot for correlation

Table 13: Egger’s regression for correlation

Egger's Regression-Based Test(a)

Parameter     Coefficient   Std. Error   t       Sig. (2-tailed)   95% CI Lower   95% CI Upper
(Intercept)   1.203         .3822        3.147   .010              .351           2.055
SE(b)         4.634         6.9071       .671    .518              -10.757        20.024

a. Random-effects meta-regression
b. Standard error of effect size

Step 3: Conducting Meta-regression

• In the menu bar, click Analyze > Meta Analysis > Meta Regression
• Move ‘Predicted Value of Effect Size [ES]’ to Effect Size
• Move ‘Estimated Standard Error of Predicted Value of Effect Size [seES]’ to Standard Error
• Move ‘subgroup’ to Factor(s)
• Move ‘age’ to Covariate(s)
• Select ‘Random-effects’ under Model
• Click Print > select ‘Display exponentiated statistics’
• Click Plot > select ‘age’ > move ‘Study’ to Label
• Once all the settings have been chosen properly, press ‘OK’.

Interpreting and reporting the results

We note that neither subgroup nor age significantly moderates the estimate [Table 14]. The bubble plot also shows that the effect doesn’t change with varying ages [Figure 15].


Table 14: Meta-regression for correlation

Parameter Estimates

Parameter      Estimate   Std. Error   t       Sig. (2-tailed)   95% CI Lower   95% CI Upper
(Intercept)    1.363      .4899        2.783   .021              .255           2.472
subgroup = 1   .027       .1608        .166    .872              -.337          .390
subgroup = 2   0(a)       .            .       .                 .              .
age            .002       .0090        .175    .865              -.019          .022

a. This parameter is set to zero because it is redundant.


Figure 15: Bubble plot for correlation

Ancillary cases

Standardised mean difference
Usually, a continuous outcome is measured using the same scale in all the studies. This is common for laboratory outcomes. However, there is often a lack of uniformity with questionnaires or scales. Depression severity, pain, and quality of life are often measured with different scales or tools in different studies. In such cases, when the same outcome is being measured, the standardised mean difference is often used for continuous outcomes. In SPSS, this can be opted for by selecting Cohen's d, Hedges' g, or Glass's Delta in the analysis pane [Figure 3].

Odds ratio
The choice between risk ratio (RR) and odds ratio (OR) is a long-standing debate [29]. While both are used for prospective studies, only the OR is used for cross-sectional and retrospective studies. We used the RR in the previous example. For the OR, we must opt for the log odds ratio in the meta-analysis pane.

Subgroup analyses
Authors sometimes want to present the subgroup analysis in the forest plot itself. In such cases, readers need to click on analysis within the meta-analysis pane. Here, they can move the subgroup variable (in the list of variables on the left) to the field for subgroup analysis (on the right). Then, once the output is generated, all the plots – forest plot, funnel plot, bubble plot etc. – are generated considering both the overall analysis and the subgroup analyses.

Transformation and back transformation


Though it is not needed for writing up the results, to get a better idea of the concepts one can exponentiate (take the inverse natural logarithm of) the values in Table 7 and check whether the results match Table 8.
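For instance, a one-line check in R using the values from Table 7:

exp(c(0.266, -0.110, 0.643, -1.099, 1.632))
# returns roughly 1.31, 0.90, 1.90, 0.33, and 5.11, matching Table 8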


Future directions

Heterogeneity exploration in forest plots
Forest plots often summarise the evidence for a single outcome. Estimates from all individual studies, the final pooled estimate, heterogeneity estimates, and subgroup effects can all be demonstrated in a forest plot. Indeed, an astute eye can even screen for publication bias in the forest plot itself. Though SPSS provides many of the required features, there are a few suggestions that could increase the comprehensiveness of its forest plots. The prediction interval is an indispensable tool for assessing heterogeneity [10], and it could be incorporated into the forest plot itself.

Moreover, subgroup analyses can be depicted in the forest plot as discussed earlier. However, I2, tau2, and other heterogeneity estimates are not visible for each subgroup. Adding these can help assess heterogeneity within each subgroup. Thirdly, the total number of participants either in each group or in the whole analysis is important. This is required for rating the quality of evidence, or our confidence in the pooled effect [30].

We have attached the forest plots for subgroup analysis constructed using SPSS [Figure 16A] and using the meta package [31] within R [Figure 16B]. We have highlighted the portions in Figure 16B that depict those features that we hope can be incorporated in the future in the meta-analysis module of SPSS.

Figure 16A: Forest plot with subgroup analysis in SPSS

Figure 16B: Forest plot with subgroup analysis using meta package in R

Harmonisation with other MA software
The most common method of data extraction for binary outcomes involves extracting the number of events and the total number of participants in each group. In our example, this would mean extracting the number of participants with adverse events and the total number of participants in each group. This is how meta-analysis is commonly performed with other software such as R, RevMan, and STATA. However, SPSS is slightly different, as it needs the number of participants with adverse events and the number of participants without adverse events in each group; that is, it needs the number of participants without adverse events instead of the total number of participants. Though this is clear when visiting the interface, it is not the usual practice. Hence, researchers who are habituated to other software and are in a hurry may input the total number of participants instead of the number of failures, which can unintentionally produce incorrect results. SPSS could therefore harmonize these fields with other software, or retain the current approach but offer the option to enter either the number of failures or the total number of participants, similar to the existing options for entering the standard deviation, variance, or weight.

Sensitivity analysis
It is important to assess whether the pooled estimate remains robust under varying circumstances. This aids us in assessing the generalisability of the results. Leave-one-out meta-analysis omits each study one by one and assesses if the pooled effect varies.
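In R, the meta package already offers this through metainf(); a hedged sketch using the m1 object from Example 1 (argument names may vary slightly across meta versions):

metainf(m1, pooled = "random")            # pooled estimate after omitting each study in turn
forest(metainf(m1, pooled = "random"))    # leave-one-out forest plot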

Alternative methods for publication bias assessment
Publication bias assessment is very important for interpreting the results of an evidence synthesis. SPSS has taken a good step in this direction by not restricting users to Egger's regression; it also offers the recommended Harbord's and Peters' tests for binary outcomes [32]. Another step in this direction would be the incorporation of the Doi plot and LFK index. When we assess publication bias for single-group (non-comparative) studies or for meta-analyses with 5-9 studies, the funnel plot and Egger's regression are not validated [33]. However, the Doi plot and LFK index have shown acceptable results in these cases too [34], so they are key for meta-analyses of proportions and similar cases [23-25], and have already been implemented in MetaXL, R, and STATA. They would be a good addition to SPSS as well.
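For comparison, the metasens package in R exposes these diagnostics; this is a tentative sketch, and the function names and arguments should be verified against the installed metasens version:

library(metasens)
lfkindex(m2$TE, m2$seTE)    # LFK index for the binary-outcome example above
doiplot(m2$TE, m2$seTE)     # Doi plot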

Bayesian meta-analysis
When the number of studies is small, a Bayesian framework for the meta-analysis is often preferred as it helps to better estimate the uncertainty [35]. Moreover, it allows us to use prior knowledge, and to calculate exact probabilities of the pooled estimate being greater or smaller than a prespecified threshold [36].
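A hedged sketch of such an analysis with the bayesmeta package in R, using a vector of effect sizes (yi), standard errors (sei), and study labels; the priors are left at package defaults here and should be tailored to the clinical question:

library(bayesmeta)
bm <- bayesmeta(y = yi, sigma = sei, labels = studies)
summary(bm)   # posterior summaries for the pooled effect (mu) and tau
plot(bm)      # forest plot and posterior distributions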


Supporting information

None

Ethical Considerations

None

Acknowledgments

None

Funding


This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Author contribution statement

Russell Kabir: Conceptualization, Software, Formal analysis, Writing - Original Draft. Haniya Zehra Syed: Conceptualization, Software, Formal analysis, Writing - Original Draft. Richard Hayhoe: Conceptualization, Software, Formal analysis, Writing - Original Draft. Ali Davod Parsa: Conceptualization, Software, Formal analysis, Writing - Original Draft. Madhini Sivasubramanian: Conceptualization, Software, Formal analysis, Writing - Original Draft. Masoud Mohammadnezhad: Conceptualization, Software, Formal analysis, Writing - Original Draft. Brijesh Sathian: Conceptualization, Software, Formal analysis, Writing - Original Draft. Kizhessery Rahna: Software, Validation, Writing - Original Draft, Visualization. Manav Jain: Validation, Writing - Review & Editing. Aravind P Gandhi: Conceptualization, Validation, Writing - Review & Editing. Muhammad Aaqib Shamim: Conceptualization, Software, Formal analysis, Writing - Original Draft, Visualization, Data Curation. Shoban Babu Varthya: Conceptualization, Validation, Writing - Review & Editing. Surjit Singh: Conceptualization, Validation, Writing - Review & Editing. Pradeep Dwivedi: Conceptualization, Validation, Writing - Review & Editing

Data availability statement


Data included in article/supp. material/referenced in article.

Additional information


No additional information is available for this paper.

Declaration of competing interest


The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

References

1. Sharp MK, Baki DABA, Quigley J, Tyner B, Devane D, Mahtani KR, et al. The effectiveness and acceptability of evidence synthesis summary formats for clinical guideline development groups: a mixed-methods systematic review. Implement Sci. 2022;17(1):74 [Crossref][PubMed][Google Scholar]

2. Kabir R, Hayhoe R, Bai ACM, Vinnakota D, Sivasubramanian M, Afework S, et al. The systematic literature review process: a simple guide for public health and allied health students. Int J Res Med Sci. 2023;11(9):3498-506 [Crossref][PubMed][Google Scholar]

3. Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(Suppl 1):29-37 [Crossref][PubMed][Google Scholar]

4. Gurevitch J, Koricheva J, Nakagawa S, Stewart G. Meta-analysis and the science of research synthesis. Nature. 2018;555(7695):175-82 [Crossref][PubMed][Google Scholar]

5. Tatsioni A, Ioannidis JPA. Meta-analysis. In: Quah SR, editor. International Encyclopedia of Public Health (Second Edition). Second Edition ed. Oxford: Academic Press; 2017. p. 117-24 [Crossref][PubMed][Google Scholar]

6. Ding W, Li J, Ma H, Wu Y, He H. Science Mapping of Meta-Analysis in Agricultural Science. Information. 2023;14(11) [Crossref][PubMed][Google Scholar]

7. Allison DB, Gorman BS. Calculating effect sizes for meta-analysis: The case of the single case. Behav Res Ther. 1993;31(6):621-31 [Crossref][PubMed][Google Scholar]

8. Hansen C, Steinmetz H, Block J. How to conduct a meta-analysis in eight steps: a practical guide. Manag Rev Q. 2022;72(1):1-19 [Crossref][PubMed][Google Scholar]

9. Gandhi AP, Shamim MA, Padhi BK. Steps in undertaking meta-analysis and addressing heterogeneity in meta-analysis. The Evidence. 2023;1(01) [Crossref][PubMed][Google Scholar]

10. IntHout J, Ioannidis JPA, Rovers MM, Goeman JJ. Plea for routinely presenting prediction intervals in meta-analysis. BMJ open. 2016;6(7):e010247-e [Crossref][PubMed][Google Scholar]

11. Shamim MA, Gandhi AP, Dwivedi P, Padhi BK. How to perform meta-analysis in R: A simple yet comprehensive guide. The Evidence. 2023;1(01) [Crossref][PubMed][Google Scholar]

12. Barker TH, Migliavaca CB, Stein C, Colpani V, Falavigna M, Aromataris E, et al. Conducting proportional meta-analysis in different types of systematic reviews: a guide for synthesisers of evidence. BMC Med Res Methodol. 2021;1-9 [Crossref][PubMed][Google Scholar]

13. Andrade C. Mean Difference, Standardized Mean Difference (SMD), and Their Use in Meta-Analysis: As Simple as It Gets. J Clin Psychiatry. 2020;81(5) [Crossref][PubMed][Google Scholar]

14. Sahai H, Khurshid A. Statistics in Epidemiology: Methods, Techniques and Applications: CRC Press 1995. . [Crossref][PubMed][Google Scholar]

15. STATA Meta-Analysis Reference Manual Release 18. Report No. : 1597183903. [Crossref][PubMed][Google Scholar]

16. Chaimani A, Caldwell DM, Li T, Higgins JPT, Salanti G. Undertaking network meta-analyses. Cochrane Handbook for Systematic Reviews of Interventions: John Wiley & Sons, Ltd; 2019. p. 285-320 [Crossref][PubMed][Google Scholar]

17. Huedo-Medina TB, Sánchez-Meca J, Marín-Martínez F, Botella J. Assessing heterogeneity in meta-analysis: Q statistic or I 2 Index? Psychol Methods. 2006;11(2):193-206. [Crossref][PubMed][Google Scholar]

18. Borenstein M. Avoiding common mistakes in meta‐analysis: Understanding the distinct roles of Q, I‐squared, tau‐squared, and the prediction interval in reporting heterogeneity. Res Synth Methods. 2023 [Crossref][PubMed][Google Scholar]

19. Lin L, Chu H, Hodges JS. Alternative measures of between-study heterogeneity in meta-analysis: Reducing the impact of outlying studies. Biometrics. 2017;73(1):156-66 [Crossref][PubMed][Google Scholar]

20. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557- [Crossref][PubMed][Google Scholar]

21. Borenstein M, Higgins JPT, Hedges LV, Rothstein HR. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res Synth Methods. 2017;8(1):5-18 [Crossref][PubMed][Google Scholar]

22. Borenstein M. How to understand and report heterogeneity in a meta-analysis: The difference between I-squared and prediction intervals. Integr Med Res. 2023 [Crossref][PubMed][Google Scholar]

23. Shamim MA. Real-life implications of prevalence meta-analyses? Doi plots and prediction intervals are the answer. Lancet Microbe. 2023;4(7):e490 [Crossref][PubMed][Google Scholar]

24. Anil A, Shamim MA, Saravanan A, Sandeep M. HPV DNA and p16(INK4a) positivity in vulvar cancer and vulvar intraepithelial neoplasia. Lancet Oncol. 2023;24(6):e235 [Crossref][PubMed][Google Scholar]

25. Shamim MA, Dwivedi P, Padhi BK. Beyond the funnel plot: The advantages of Doi plots and prediction intervals in meta-analyses. Asian J Psychiatr. 2023;84:103550 [Crossref][PubMed][Google Scholar]

26. Brydges CR. Effect Size Guidelines, Sample Size Calculations, and Statistical Power in Gerontology. Innov Aging. 2019;3(4):igz036-igz [Crossref][PubMed][Google Scholar]

27. Sedgwick P. What is publication bias in a meta-analysis? BMJ. 2015;351. [Crossref][PubMed][Google Scholar]

28. Silver NC, Dunlap WP. Averaging correlation coefficients: Should Fisher's z transformation be used? J Appl Psychol. 1987;72(1):146-8. [Crossref][PubMed][Google Scholar]

29. Cummings P. The relative merits of risk ratios and odds ratios. Arch Pediatr Adolesc Med. 2009;163(5):438-45 [Crossref][PubMed][Google Scholar]

30. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schunemann HJ, et al. What is "quality of evidence" and why is it important to clinicians? BMJ. 2008;336(7651):995-8. [Crossref][PubMed][Google Scholar]

31. Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22(4):153-60 [Crossref][PubMed][Google Scholar]

32. Sterne JAC, Sutton AJ, Ioannidis JPA, Terrin N, Jones DR, Lau J, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343(jul22 1):d4002-d [Crossref][PubMed][Google Scholar]

33. The Cochrane C. Recommendations on testing for funnel plot asymmetry. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. 5.1.0 ed. 2023 [Crossref][PubMed][Google Scholar]

34. Furuya-Kanamori L, Barendregt JJ, Doi SAR. A new improved graphical and quantitative method for detecting bias in meta-analysis. International Journal of Evidence-based Healthcare. 2018;16(4):195-203 [Crossref][PubMed][Google Scholar]

35. McNeish D. On Using Bayesian Methods to Address Small Sample Problems. Structural Equation Modeling: A Multidisciplinary Journal. 2016;23(5):750-73 [Crossref][PubMed][Google Scholar]

36. Williams DR, Rast P, Bürkner P-C. Bayesian Meta-Analysis with Weakly Informative Prior Distributions. PsyArXiv. 2010 [Crossref][PubMed][Google Scholar]

Disclaimer / Publisher’s Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of Journals and/or the editor(s). Journals and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.