Do you have a breakthrough idea that reviewers refuse to appreciate? Have you tried to challenge the reviewers? I would love to hear how, but I am also inviting you to take up a challenge for the cause of those who have big ideas that receive no eyes, ears, or voice.

Today, big news on the internet is about a new study published in the Proceedings of the Geologists’ Association, reporting the shocking discovery of 5.7-million-year-old early human footprints on a Greek island[1]. The discovery is shocking because the legacy narrative taught in universities, that humans originated in Africa and then migrated to the rest of the world, is derived from the 3.7-million-year-old early human footprints in Tanzania and the 4.4-million-year-old fossils with ape-like feet in Ethiopia. Previously, no human-like fossil older than 1.8 million years had ever been found outside Africa.

Earlier this year, in another shocking development, a different group of researchers reinterpreted fragmentary 7.2-million-year-old fossils from Greece and Bulgaria as belonging to early humans. The main thing making some researchers take notice of the new footprint study is that its authors are well respected in the field, and that one of them discovered the footprints by chance while vacationing on the Greek island in 2002. As a notable scholar notes, “This discovery challenges the established narrative of early human evolution head-on and is likely to generate a lot of debate. Whether the human origins research community will accept fossil footprints as conclusive evidence of the presence of [early humans] in [Greek island] remains to be seen.”

Acceptance of major scientific discoveries by the research community is a challenge. In 2016, the New Republic published an article, “Science Is Suffering Because of Peer Review’s Big Problems.” The contributor, Stefano Balietti[2], lamented three well-known facts about scientific publishing.

1. Big discoveries get rejected
In 2015, three of the top medical journals were found to have rejected 14 out of 14 of the most-cited articles of all time in their discipline. Even in the social sciences, George A. Akerlof’s seminal paper, “The Market for Lemons,” which introduced the concept of “asymmetric information” (how decisions are influenced by one party having more information than the other), was rejected by several leading journals before it was published. Akerlof was later awarded the Nobel Prize for this and later work.

2. Something entirely original and innovative is challenging to review

Assessing the quality of a scientific work is a hard task even for trained scientists; how, then, can a trained expert review something entirely original and innovative? In such cases, behavioral psychology tells us that the reviewers and editors of high-profile journals will seek to avoid the potential risk and reject the paper.

3. Competition intensifies rejection of the new
Academic competition for funding, positions, publication space, and credit is intense, and with globalization, the number of academic researchers has risen exponentially. Balietti and his colleagues conducted an experimental study to investigate three fundamental aspects of competition: 1) Does competition promote or reduce innovation? 2) Does competition reduce or improve the fairness of reviews? 3) Does competition improve or hamper the ability of reviewers to identify valuable contributions? They found that while competition resulted in more creative and original works being created, it did not improve the average quality of the works published. The reason: in the competitive scenario, the more original works were more likely to be rejected by reviewers, while in the non-competitive scenario, the less original works were more likely to be rejected. Balietti notes, “We found that a consistent number of reviewers, aware of this competition, purposely downgraded the review score of the competitor to gain a personal advantage.” He adds, “That is why some Nobel Prize winners no longer hesitate to publish their results in low-impact journals.”

New research in an entirely different but broadly analogous context has similar findings. Using a database of stakeholder suggestions, Piezunka and Dahlander (2015)[3] find that when reviewers have access to an entire crowd of creators, they tend to receive more innovative creations. Yet they are also more likely to intensify their selection of creations that fit the existing paradigm. In other words, a crowding of creations narrows reviewers’ attention to less distant works. And if a contributor, or the contributor’s supporters, has few prior ties with the core reviewer club, it is even less likely that the original creation of that contributor will receive reviewer attention.

In 1997, as part of my Ph.D. dissertation at the Wharton School, I developed a dynamic mathematical model using first principles of thermodynamics to measure one unit of technological growth. While my dissertation committee found it difficult to review this original work, it decided unanimously in favor of accepting the dissertation and conferring the Ph.D. degree. I was fortunate to have developed a trust-based relationship with the members of my committee. Unfortunately, though, my attempts at the time to publish the dissertation work failed.

After my Ph.D., I was invited to be a core member of the GLOBE research project at the Wharton School, led by Professor Robert House. The GLOBE project sought to validate the basic premise of Hofstede’s established cross-cultural paradigm, the value-belief theory: that cultural values are shaped by geographical factors and are comparatively stable over history, and that cultural values correlate perfectly with cultural practices. In the GLOBE project, we found varying relationships between cultural values and cultural practices: some were negatively related, some had no relation, and others were positively related. We also found that some cultural values, as GLOBE measured them, actually had a negative relation with Hofstede’s measures. It was rather challenging to publish these findings in refereed journals. Only after the GLOBE book was published did reviewers become more open to research using the GLOBE paradigm, although many applied a more critical lens than that applied to studies using Hofstede’s framework and dimension scores, even though GLOBE put a big question mark on the validity of the core tenet of Hofstede’s value-belief theory.

Twenty years after my dissertation, I have chosen to make a fresh attempt at showing the great promise of that work. In recent years, new working papers and published research have begun to emerge that support the principles guiding the dynamic modeling in my dissertation. It has thus become feasible to support the scientific reasoning with the specific citations that reviewers look for, and to hope that reviewers aware of the emerging empirical research in leading management journals will find the findings at least worth their attention.

My co-author, Yi Zhang of Zayed University, conducted an outstanding survey of Chinese small-scale enterprises in 2015, and one of the questions was about their environmental management practices. She developed her survey based on a range of well-established constructs and scales in the new strategy literature. This has made it feasible to empirically test new hypotheses about a firm’s environmental performance management, based on the intuition, reasoning, and dynamic set of mathematical equations advanced in my Ph.D. dissertation.

We have two working papers that offer complementary perspectives on major issues in the fields of management science and applied economics, such as market failure and externalities. My co-author Yi sent the first one to a respectable B-category journal and received a very prompt response from the editor earlier this week:

In general, an article must deliver an original contribution to overall theory or policy by:
• Developing a new concept or theory and spelling out its derivation and implications; or
• Using a systematic analysis or a map of a literature in order to offer a new critical view; or
• Using a literature review to derive a fully researchable question and then using an empirical analysis to test a possible explanatory model that aims to deal with it.

What you have written is very interesting – at least as far as the statistics are concerned – but we need rather more than that.
1. The submission needs more in the introduction concerning the background and context of the national culture, economy and its management, in depth.
2. You need to put in more on the theoretical front as well.
3. After you have discussed the statistical findings, you need to develop much more discussion and evaluation in qualitative terms, say a couple of pages.
4. We then expect a section on implications for theory and practice.
5. More on limitations and further research is also required.
6. Finally, we need a fuller set of conclusions.

Next steps: we would advise you to take your time, maybe resubmitting in three months’ time but not before.

The editor’s response prompted me to draft, over the past week, a second working paper titled “An Ontological Derivation of the Dynamic Conditionality for Environmental Performance Management.” In it, a set of twenty algebraic equations is derived from first principles to characterize the system of dynamic relationships and forces in a firm’s environmental performance management. This includes ontological mathematical definitions of fundamental terms such as green capability, green investment, green trading, green servicing, green exchange, green programming, green planning, green learning, green development, and green cost. The paper explains how these definitions and their dynamic relationships are relevant for managerial practice and future research.

I am confident that our findings represent a primeval scientific discovery: primeval implying principles from the earliest ages of human life; scientific implying principles that are measurable, quantifiable, and verifiable; and discovery implying principles that are present in nature, similar to gravitational waves.

As my challenge, therefore, I invite scholars to read the second paper on dynamic conditionality (it builds on the insights from the first paper, on dynamic theory), and to prove any of the following:

a) Any of the twenty mathematical ontologies is not valid;

b) Any of the twenty mathematical ontologies is not original (i.e., it has been reported by somebody else before); or

c) Any of the twenty mathematical ontologies does not have significant managerial, research, or policy relevance.

To download the first paper:
A Dynamic Theory of Environmental Performance Management and Exploratory Evidence in Chinese Context

To download the second paper: 
An Ontological Derivation of the Dynamic Conditionality for Environmental Performance Management


I promise that if any scholar is able to prove any of the above, I will donate $5,000 from my personal savings to an environmental or social cause of that scholar’s preference.

But if you are unable to prove any of these, then I ask that you invite another scholar who you believe might meet the challenge. And do let me know by commenting on this blog or, if you prefer, by emailing me at gupta05@gmail.com.

Are you up for my challenge? 

                                *********************************************

N.B. Just as a nudge: as soon as one hundred professors from Ph.D.-granting universities send me a message affirming their participation in the challenge, I promise to donate $1,000 to a social or environmental cause chosen by one of them (selected at random).
                                *********************************************

[1] https://www.sciencedaily.com/releases/2017/08/170831134221.htm

[2] https://newrepublic.com/article/135921/science-suffering-peer-reviews-big-problems

[3] Piezunka, H., & Dahlander, L. 2015. “Distant Search, Narrow Attention: How Crowding Alters Organizations’ Filtering of Suggestions in Crowdsourcing.” Academy of Management Journal, 58(3), 856-880.


Categories: Blog / Published On: September 2nd, 2017 /