U.S. ammonia prices stabilize at nearly $1,500 per ton

Natural gas prices have stabilized at just over $4/MMBtu since August 2021, sending anhydrous ammonia prices above $1,400 per ton. As of January 13, anhydrous prices in the U.S. have reached $1,486/ton, up from $1,434 in December. At these prices, autogenous ammonia production using advanced high-temperature microporous insulated reactors fed by photovoltaic facilities could yield payback times of less than two years. The innovative capacity of human civilization will be tested in this new environment of artificially elevated nitrogen fertilizer prices.



Powerplant: Horizon PEMFC, 50% efficiency, 5 kW/kg (3 hp/lb)

Gross weight: 10,500 lbs

Empty weight @ 55%: 5,775 lbs

Drag: 800 lbs

Power: 520 hp

Hydrogen consumption: 23.3 kg/hr

Propeller diameter: 8.75 ft

Number of propellers: 2

Power loading: 5.5 hp/ft2

Thrust-power ratio: 5.15 lbf/hp @ sea level, 1.54 @ FL280

Cruise speed: 320 MPH

Range: 4700 miles

Endurance: 14.68 hours

LH2 tankage drag thrust penalty: 20 kg LH2

LH2 fuel weight: 342 kg

Tankage @35%: 119 kg

Fuel weight at 2000 miles: 154 kg

LH2 volume: 170 cubic feet

Block fuel weight: 1,017 lbs

Payload: 4,725 lbs

Net payload at 4,700 mi: 3,700 lbs

Net payload at 2,000 mi: 4,285 lbs

Equivalent jet fuel weight with PT6-A67B: 5390 lbs

Cargo cost for 7000 miles: $0.85/kg @ $2/kg LH2

Conventional air freight, pre-Covid average, Hong Kong–North America: $3.5/kg

Cost advantage: 4x

UAV manufacturing cost: $1,500,000

Airframe life based on 757: 100,000 hours

Hourly airframe cost: $15
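The headline performance numbers above can be cross-checked against one another. A minimal sketch (assuming a hydrogen lower heating value of 33.3 kWh/kg, which the spec list does not state) reproduces the fuel burn, endurance, and block fuel figures:

```python
# Cross-check of the UAV spec-sheet figures above.
# Assumption: hydrogen LHV of 33.3 kWh/kg (not stated in the spec list).
H2_LHV_KWH_PER_KG = 33.3
HP_TO_KW = 0.7457
KG_TO_LB = 2.2046

power_kw = 520 * HP_TO_KW                        # 520 hp cruise power
h2_kg_hr = power_kw / 0.50 / H2_LHV_KWH_PER_KG   # 50% efficient PEMFC
endurance_hr = 4700 / 320                        # range / cruise speed
lh2_kg = h2_kg_hr * endurance_hr                 # fuel for full range
tank_kg = 0.35 * lh2_kg                          # tankage at 35% of fuel mass
block_fuel_lb = (lh2_kg + tank_kg) * KG_TO_LB

print(f"{h2_kg_hr:.1f} kg/hr, {endurance_hr:.2f} hr, {block_fuel_lb:.0f} lbs")
```

This lands within about a percent of the quoted 23.3 kg/hr, 14.68 hr, and 1,017 lbs of block fuel.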

Galton reaction time paradox solved.

Christophe Pochari, Pochari Technologies, Bodega Bay, CA

Abstract: The paradox of slowing reaction time has not been fully resolved. Since Galton collected 17,000 samples of simple auditory and visual reaction time from 1887 to 1893, achieving an average of 185 milliseconds, modern researchers have been unable to replicate such fast results, leading some intelligence researchers to erroneously argue that the slowing has been mediated by selective mechanisms favoring lower g in modern populations.

Introduction: In this study, we have developed a high-fidelity measurement system for ascertaining human reaction time, with the principal aim of eliminating the preponderance of measurement latency. To accomplish this, we designed a high-speed photographic apparatus in which a camera records both the stimulus and the participant’s finger movement. The camera is an industrial machine vision unit built to stringent commercial standards (Contrastec Mars 640-815UM, $310, Alibaba.com) with a commercial-grade Python 300 sensor. It feeds through a USB 3.0 connection into a Windows 10 PC running Halcon machine vision software and records at a high frame rate of 815 frames per second, or 1.2 milliseconds per frame. The camera begins recording, the stimulus source is then activated, and filming continues until after the participant has depressed a mechanical lever. The footage is then analyzed frame by frame in a frame-rate analyzer such as Virtualdub 1.10; the point of stimulus appearance is set as point zero, where the elapsed reaction time commences. When the LED monitor begins refreshing the screen to display the stimulus color, which is green in this case, the frame analyzer is used to identify the point where the refresh is approximately 50 to 70% complete; this point is set as the beginning of the measurement, as we estimate the human eye can detect the presence of the green stimulus before it is fully displayed. Once the point of stimulus arrival is ascertained, the next step is identifying the point where finger displacement becomes conspicuously discernible, that is, when the lever first shows evidence of motion from its resting position.
Using this innovative technique, we measured a true reaction time to visual stimuli of 152 milliseconds, 33 milliseconds faster than Francis Galton’s pendulum chronograph. We collected a total of 300 samples to arrive at a long-term average. Using the same test participant, we compared a standard PC measurement system running Inquisit 6, obtaining results of 240 ms with a desktop keyboard and 230 ms with a laptop keyboard; this difference of 10 ms is likely due to the longer keystroke distance on the desktop keyboard. We also used the well-known online test humanbenchmark.com and achieved an average of 235 ms. Across the two tests, one internet-based and one local, the total latency appears to be up to 83 ms, nearly 40% of the gross figure. These findings strongly suggest that modern methods of testing human reaction time impose a large latency penalty which skews results upwards, hence the appearance that reaction times are slowing. We conclude that rather than physiological changes, the slowing of simple RT is imputable to poor measurement fidelity intrinsic to computer/digital measurement techniques.
In summary, it cannot be stated with any degree of confidence that modern Western populations have experienced slowing reaction times since Galton’s original experiments. Attempts to extrapolate losses in general cognitive ability from putatively slowing reaction times are therefore seriously flawed and rest on confounding variables. The reaction time paradox is not a paradox at all; it rests on conflating latency with slowing, a rather elementary problem that continues to perplex experts in the field of mental chronometry. We urge mental chronometry researchers to abandon measurement procedures fraught with latency, such as PC-based systems, and adopt high-speed machine vision cameras as a superior substitute.



Anhydrous ammonia reaches nearly $900/ton in October


Record natural gas prices have sent ammonia skyrocketing to nearly $900 per ton in the North American market. Natural gas has reached $5.6/1,000 cf, driving ammonia back to 2014 prices. Pochari’s distributed photovoltaic production technology will now become ever more competitive, featuring even shorter payback periods.

A battery electric cargo submarine: a techno-economic assessment


*Christophe Pochari, Bodega Bay, CA

*Pochari Technologies

Abstract: An electric submarine powered by lithium-ion batteries, with a cruising speed of under 8 knots, is proposed for ultra-low-cost, emission-free shipping. This concept, if developed according to these design criteria, would be able to lower the cost of shipping compared to conventional container ships, and we believe it could revolutionize the industry. The relatively small size of the submarine allows it to avoid large ports, reducing congestion and improving turnaround time, and allows the shipper to deliver directly to virtually any calm shoreline.

Due to the absence of the resistance generated by stern and bow waves, a submarine can sometimes be a more efficient marine vehicle than a conventional vessel, especially in rough oceans, where a substantial parasitic load is engendered by waves crashing in the incoming direction. Additionally, submarines are safer due to their immunity to storms, swells, and rogue waves. Furthermore, current velocity diminishes rapidly with depth, so when currents run in the incoming direction, the parasitic load on the vessel is further reduced. The risk of cargo loss is also minimized, as container ships routinely lose valuable cargo at sea.


The resistance at 6.2 knots is estimated to be 0.3 lbf/ft2 of wetted area; at 8.4 knots, it is estimated to be 0.5 lbf/ft2. This means a submarine capable of carrying 950 cbm of cargo needs only a paltry 90 hp at 6.2 knots. The propulsive efficiency of large, slow-turning ship propellers is on the order of 25-30 lbf/shp, with 28 lbf/hp being a realistic estimate for a submarine of this size.

Henrik Carlberg at NTNU (Norway) estimated that a commercial submarine designed for oil and gas applications would use 165 kW at 6.2 knots with a wetted area of 1920 m2.
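Carlberg’s 165 kW figure lines up with the resistance and propeller-efficiency estimates quoted above; a quick sketch of the check:

```python
# Power check: 1920 m2 wetted area at 0.3 lbf/ft2 (6.2 knots),
# with a propulsive efficiency of 28 lbf of thrust per shaft hp.
M2_TO_FT2 = 10.7639
drag_lbf = 1920 * M2_TO_FT2 * 0.3
shaft_hp = drag_lbf / 28
shaft_kw = shaft_hp * 0.7457
print(f"{drag_lbf:.0f} lbf drag -> {shaft_hp:.0f} hp -> {shaft_kw:.0f} kW")
```

The result comes out at roughly 165 kW, matching Carlberg’s estimate.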

The submarine design featured here has the same wetted area, with net hull volume minus ballast tanks and battery volume of 2300 cbm.

This 2300 cbm cargo submarine would take 1,000 hours to traverse 7,000 miles, consuming 215,000 kWh along the way at a cost of $6,450 at $0.03/kWh. With a 3,000-cycle Li-ion life and battery prices of $110/kWh (Panasonic 2170), battery depreciation adds $7,800 per trip, or $3.4/cbm. The total cost would be $12.4/cbm ($930 per 40 ft container equivalent) including an empty trip back. Excluding the round trip, assuming goods are transported in the other direction as well, the cost per 40 ft container equivalent would be $465, far below the pre-Covid price of $1,500 (Freightos Baltic Index) for existing bunker-fuel-powered mega-container ships. For such a small vessel using no hydrocarbon fuel, it is remarkable that the cost is close to that of massive, highly optimized container ships.
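The per-trip economics above can be reproduced in a few lines; the 75 cbm per 40 ft container equivalent is an assumption implied by the quoted $930 figure:

```python
# Trip cost sketch for the 2300 cbm submarine (assumptions from the text:
# 215,000 kWh per 7,000 mi leg, $0.03/kWh electricity, $110/kWh cells,
# 3,000 charge cycles, ~75 cbm per 40 ft container equivalent).
energy_kwh = 215_000
electricity_usd = energy_kwh * 0.03                   # ~$6,450 per leg
battery_usd = energy_kwh * 110 / 3_000                # ~$7,880 per leg
round_trip_usd = 2 * (electricity_usd + battery_usd)  # empty return leg
per_cbm = round_trip_usd / 2300
per_feu = per_cbm * 75
print(f"${per_cbm:.1f}/cbm, ${per_feu:.0f} per 40 ft equivalent")
```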

Construction costs for the submarine have been estimated at $5,000,000, with the steel structural materials costing $1,100,000 at a price of $800/ton. With a lifetime of 40 years, hourly CAPEX and depreciation is minimal.

The submarine would be unmanned, saving on crew costs, and would require no AIP, as both the manned crew and air-breathing combustion propulsion are eliminated.


Maximum displacement: 3,890,000 kg

Structure weight: 700,000 kg

Cargo volume: 2300 cbm (186 kg/m3 cargo density avg)

Wetted area: 1990 m2

Ballast volume: 530 m3: 543,000 kg

Battery at 200 wh/kg and 500 wh/liter (rectangular): 268,000 kWh (80% depletion): 1,340,000 kg: 536 m3
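The battery line above is internally consistent; a minimal check of the implied mass, volume, and usable energy:

```python
# Battery pack implied by the spec: 268,000 kWh at 200 Wh/kg and
# 500 Wh/L, with 80% usable depth of discharge.
pack_kwh = 268_000
mass_kg = pack_kwh * 1000 / 200   # Wh divided by Wh per kg
volume_m3 = pack_kwh / 500        # 500 Wh/L = 500 kWh per m3
usable_kwh = 0.8 * pack_kwh
print(mass_kg, volume_m3, usable_kwh)
```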

Total loaded weight: 2,850,000 kg

Front and rear weights: 700,000 kg steel plates

Motor power: 270 hp

Length: 72.9 m

Diameter: 9.2 m

The images below are sourced from Henrik Carlberg’s thesis on a commercial submarine for oil and gas. Skin friction estimates were corroborated against Martin Renilson’s estimate of 59,000 newtons for a wetted area of 1400 m2 at a speed of 9.7 knots. To adjust for our lower speed, a ratio of approximately 2.2 is found from the CFD analyses of Moonesun et al. and Putra et al. between 6 and 9 knots. Propulsive efficiency is calculated using the formula below.

[Figures: CFD resistance results and propulsive efficiency formula from Carlberg’s thesis]


Reference: IJMS 42(8) 1049-1056.pdf


The limits of mental chronometry: little decline in g can be inferred from Galtonian datasets


Abstract: Using high-speed photography with industrial machine vision cameras, Pochari Technologies has acquired ultra-high-fidelity data on simple visual reaction time, in what appears to be the first study of its kind. The vast preponderance of reaction time studies make use of computer-software-based digital measurement systems that are fraught with response lag. For illustration, Inquisit 6, a Windows PC program, is frequently used in psychological assessment settings. We performed 10 sample runs with Inquisit 6, yielding a running average of 242 ms with a standard keyboard and 232 ms with a laptop keyboard. The computer used was an HP laptop with 64 GB of DDR4 RAM and a 4.0 GHz Intel processor. Using the machine vision camera, a mean speed of 151 milliseconds was achieved with a standard deviation of 16 ms. Depending on where one places the cutoffs for finger movement and screen refresh, there is an interpretive leeway of around 10 ms. Based on this high-fidelity photographic analysis, our data lead to the conclusion that a latency of around 90 ms is built into digital computer-based reaction time measurement. Each individual frame was analyzed using Virtualdub 1.10.4 frame analysis software, which allows the user to step through high-frame-rate video footage. These data indicate that modern reaction times of 240-250 milliseconds (Deary etc.) cannot be compared to Galton’s original measurement of around 185 ms. Although Galton’s device was no doubt far more accurate than today’s digital systems, it still possessed some intrinsic latency; we estimate around 30 ms based on this analysis, assuming 240 ms as the modern mean. Dodonova et al. constructed a pendulum-like chronometer very similar to Galton’s original device and obtained a reaction time of 172 ms with it.

After adjusting for latency, we come to the conclusion that there has been minimal change in reaction time since 1889. We plan on using a higher-speed camera to further reduce measurement error in a follow-up study, although it is not necessary to attain much higher precision: a total timing uncertainty of ±3 milliseconds out of 150 represents a minuscule 2% error, and there is much more room for error in defining the starting and ending points.

An interesting side note: there is some data pointing to ultra-fast reaction times in athletes that seem to exceed the speed of normal simple reaction to visual stimuli under non-stressful conditions:

“Studies have measured people blinking as early as 30-40 ms after a loud acoustic stimulus, and the jaw can react even faster. The legs take longer to react, as they’re farther away from the brain and may have a longer electromechanical delay due to their larger size. A sprinter (male) had an average leg reaction time of 73 ms (fastest was 58 ms), and an average arm reaction time of 51 ms (fastest was 40 ms).”

The device used in the study is a Shenzhen Kayeton Technology Co KYT-U400-CSM high-speed USB 3.0 camera, recording 330 fps at 640 x 360 in MJPEG; a single frame increment represents an elapsed time of 3 milliseconds. Pochari Technologies has purchased a Mars 640-815UM (815 frames per second), manufactured by Hangzhou Contrastech Co., Ltd; the purpose of the 815 fps camera is to further reduce frame granularity to 1.2 milliseconds. In the second study, using a different participant, we will use the 815 fps device.

To measure finger movement, we used a small metal lever. The camera is fast enough to detect the color transition of the LED monitor; note the color changing from red to green. We set point zero as the point where the color shift is around 50% complete; the refresh moves from the top down. The participant is instructed to hold his or her finger as steady as possible during the waiting period; there is effectively zero detectable movement until the muscle contraction takes place upon nerve signal arrival, the signal traveling at around 100 m/s over a distance of 1.6 m from brain to hand (16 ms).



Mars 640-815UM USB 3.0 machine vision camera


Shenzhen Kayeton Technology Co KYT-U400-CSM high speed USB camera.

Introduction and motivation of the study

In 2005, Bruce Charlton came up with a novel idea for psychometric research: attempt to find historical reaction time data to estimate intelligence in past generations. In 2008 he wrote an email to Ian Deary proposing this new method for performing a diachronic analysis of intelligence. Deary unfortunately had no information to provide, so the project was put into abeyance until 2011, when Michael Woodley discovered Irwin Silverman’s 2010 paper, which had rediscovered Galton’s reaction time collection. The sheer obscurity of Galton’s original study is evident considering that the leading reaction time expert, Ian Deary, was not even aware of it. The original paper covering Galton’s study was Johnson et al. 1985. The subsequent paper, “Were the Victorians cleverer than us?”, generated much publicity. One of the lead authors, Jan te Nijenhuis, gave an interview with a Huffington Post journalist on YouTube discussing the theory, and it was also featured in the Daily Mail. The notoriously dyspeptic Greg Cochran threw down the gauntlet on Charlton’s claim on his blog, arguing from the breeder’s equation that such a decline is impossible. Many HBD bloggers, including HBD chick, were initially very skeptical, and prominent blogger Scott Alexander Siskind also gave a rebuttal, mainly along the lines of sample representativeness and measurement veracity, the two main arguments made here.

Galton’s original sample has been criticized for not being representative of the population at the time, as it mainly consisted of students and professionals visiting a science museum in London where the testing took place. In 1889, most of the Victorian population was comprised of laborers and servants, who would likely not have attended this museum to begin with. Notwithstanding the lack of population representation, the sample was large: over 17,000 total measurements were taken at the South Kensington Museum from 1887 to 1893. Since Galton died in 1911 and never published his reaction time findings, we are reliant on subsequent reanalyses of the data; this is precisely where error may have accrued, as Galton may have had personal insight into the workings of his measurement and data aggregation system that has not been completely documented. The data used by Silverman came from a reanalysis of Galton’s original findings published by Koga and Morant (1923), and more data was later uncovered by Johnson et al. 1985. Galton used a mechanical pendulum chronometer, renowned for its accuracy and minimal latency. Measurement error is not where criticism is due: Galton’s tool was likely more accurate than modern computer-based testing. Modern computers are thought to add around 35-40 ms of latency, not including any software or internet latencies.

The issue with inferring g decline from Galton-to-present RT data is threefold:

The first is that the sample is very unlikely to have been representative of the British population, as aforementioned. It consisted of disproportionate numbers of high-g individuals, since at the time people who participated in events like this would have been drawn overwhelmingly from the higher class strata. Society was far more class-segregated, and average and low g groups would not have participated in intellectual activities, including visiting a museum to pay to have their reaction time tested!

Scott Alexander comments: “This site tells me that about 3% of Victorians were “professionals” of one sort or another. But about 16% of Galton’s non-student visitors identified as that group. These students themselves (Galton calls them “students and scholars”, I don’t know what the distinction is) made up 44% of the sample – because the data was limited to those 16+, I believe these were mostly college students – aka once again the top few percent of society. Unskilled laborers, who made up 75% of Victorian society, made up less than four percent of Galton’s sample”

The second issue is measurement latency: when adjusting Galton’s original estimate and correcting modern samples for digital latency, the loss in reaction time collapses from the originally claimed 70 ms (14 IQ points) to a mere 20 milliseconds. Another factor, mentioned by Dodonova et al., is the process of “outlier cleaning,” where samples below 200 ms and above 750 ms are eliminated. This can have a strong effect on the mean, theoretically in either direction, although it appears that outlier cleaning increases the RT mean, since slow outliers are rarer than fast outliers.

The third issue is that reaction time studies only 50-60 years later (1940s and 50s) show reaction times equal to modern samples, which indicates any decline must have taken place within a short timeframe of only 50-60 years. A large study from Forbes (1945) shows 286 ms for males in the UK. Michael Persinger’s book on ELF waves cites a study from 1953 in Germany:

“On the occasion of the German 1953 Traffic Exhibition in Munich, the reaction times of visitors were measured on the exhibition grounds on a continuous basis. The reaction time measurements of the visitors to the exhibition consisted of the time span taken by each subject to release a key upon the presentation of a light stimulus”.

In the 1953 German study, the researchers compared the reaction times of people exposed to different levels of electromagnetic radiation. The mean appeared to be in the 240-260 ms range.

Lastly, it could have been the case that Galton instead recorded the fastest of three samples, not the mean of the samples.

Dodonova et al. say: “It is also noteworthy that Cattell, in his seminal 1890 paper on measurement, on which Galton commented and that Cattell hoped would ‘meet his (Galton’s) approval’ (p. 373), also stated: ‘In measuring the reaction-time, I suggest that three valid reactions be taken, and the minimum recorded’ (p. 376). The latter point in Cattell’s description is the most important one. In fact, what we know almost for sure is that it is very unlikely that Galton computed mean RT on these three trials. (For example, Pearson (1914) claimed that Galton never used the mean in any of his analyses.) The most plausible conclusion in the case of RT measurement is that Galton followed the same strategy as suggested by Cattell and recorded the best attempt, which would be well in line with other test procedures employed in Galton’s laboratory.”

Woods et al. (2015) confirm this statement: “based on Galton’s notebooks, Dodonova and Dodonov (2013) argued that Galton recorded the shortest-latency SRT obtained out of three independent trials per subject. Assuming a trial-to-trial SRT variance of 50 ms (see Table 1), Galton’s reported single-trial SRT latencies would be 35–43 ms below the mean SRT latencies predicted for the same subjects; i.e., the mean SRT latencies observed in Experiment 1 would be slightly less than the mean SRT latencies predicted for Galton’s subjects”

A website called humanbenchmark.com, run by Ben D Wiklund, has gathered 81 million clicks. Such a large sample size eliminates almost all sampling bias. The only remaining issue is population differences; it is not known what percentage of users are from Western nations. Assuming most are, it is safe to say this massive collection is far more accurate than a small sample performed by a psychologist. For this test to be compared to Galton’s original sample, both internet latency and hardware latency have to be accounted for, since the test is online. Internet latency depends on the distance between the user and the server, so an average is difficult to estimate. Humanbenchmark is hosted in North Bergen, US; if half the users are outside the U.S., the distance should average around 3,000 km.

“Connecting to a web site across 1500 miles (2400 km) of distance is going to add at least 25 ms to the latency. Normally, it’s more like 75 after the data zig-zags around a bit and goes through numerous routers.” Unless the website corrects for latency, which seems difficult to believe, since it would have to calculate the distance from the user’s IP and assume no VPN is in use, and if internet latency can range as high as 75 milliseconds, it is doubtful that the modern average reaction time is 167 ms; we were therefore initially forced to conclude there must be some form of latency correction, although the site makes no mention of such a feature. For example, since Humanbenchmark is hosted in New Jersey, a person taking the test in California, 4,500 kilometers away, must wait roughly 47 ms before his signal arrives, and this covers only the straight-line travel time of light; many fiber-optic cables take a circuitous path which adds distance, and there is additional latency in the server itself and in the modem and router. According to Verizon, the latency for the transatlantic New York–London route (3,500 km) is 92 ms; adjusting for the New Jersey–California distance (4,500 km) gives a figure of at least 92 ms. Since the online test begins recording elapsed time as soon as the green screen is initiated, the program in New Jersey starts its timer immediately, but 92 ms passes before you see green, and when green appears and you click, another 92 ms passes before the click arrives at the server to stop the timer. The internet is not a “virtual world”: all web services are hosted by a server computer performing computation locally, so by definition a click on a website hosted in Australia, 10,000 km away, will register roughly 113 ms after your click, limited by the speed of light.
Only a quantum-entanglement-based internet could be latency free, and even then at the expense of destroying the information, according to the uncertainty principle! Using the estimate provided by Verizon, and assuming the average test taker is within 3,000 km, we can use an estimate of 70 ms of latency. Since the latency is doubled (the timer starts as soon as the signal is sent to the user), 140 ms is simply too much to subtract; there would have to be automatic correction, which would make estimating the true latency more difficult, since many users use VPNs that would skew any such correction up or down. To be conservative, we use a gross single-direction latency of 20 ms. Upon further analysis, using a VPN with an IP in New York just a short distance from the server, a latency adjustment program (if it existed) would add little correction value, as the latency would be only a few milliseconds; the results showed no change in reaction time upon changing location, indicating no such mechanism exists, contrary to our first thought. If no latency correction exists, then modern reaction times could theoretically be as low as 140 ms (note: this is close to the real number, so our blind estimate was fairly good). The latency of LED computer monitors varies widely. For example, the LG 32ML600M, a mid-range LED monitor, has an input lag of 20 ms; this monitor was chosen randomly and is assumed to be reasonably representative of the monitors used by the 81 million users of the online test, and it is the one used in our later study. Using the software program HTML/JavaScript mouse input performance tests, we measured a latency of 17 ms for a standard computer mouse. The total latency (including internet at 20 ms) is 57 ms. From the Humanbenchmark dataset, the median reaction time was 274 milliseconds, yielding a net reaction time of 217 milliseconds, roughly 10 milliseconds slower than Galton’s adjusted numbers provided by Woods et al.
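The latency budget discussed above can be written out explicitly; the three component figures are the text’s estimates, not measurements for any particular user:

```python
# Net reaction time after subtracting estimated latencies from the
# humanbenchmark median (component figures are estimates, not measured).
measured_ms = 274      # humanbenchmark median
monitor_ms = 20        # LG 32ML600M input lag
mouse_ms = 17          # measured mouse input latency
internet_ms = 20       # conservative single-direction allowance
net_ms = measured_ms - (monitor_ms + mouse_ms + internet_ms)
print(net_ms)  # 217
```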
Bruce Charlton has created a conversion system in which 1 IQ point equals 3 ms, assuming a modern reaction time of 250 ms with a standard deviation of 47 ms. This simple but elegant method of converting reaction time into IQ is purely linear; it assumes no changes in the correlations at different levels of IQ. With this assumption, 10 ms equates to 3.3 IQ points, remarkably similar to Piffer’s estimate.
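Charlton’s linear conversion can be expressed directly; the 3 ms per IQ point slope and the 250 ms anchor are his assumptions:

```python
# Charlton's linear RT-to-IQ conversion: 1 IQ point per 3 ms of RT,
# anchored at a modern mean of 250 ms = IQ 100 (his assumptions).
def rt_to_iq(rt_ms, anchor_rt=250, anchor_iq=100, ms_per_point=3):
    return anchor_iq + (anchor_rt - rt_ms) / ms_per_point

print(rt_to_iq(240))  # 10 ms faster -> ~3.3 points higher
```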

“The mean SRT latencies of 231 ms obtained in the current study were substantially shorter than those reported in most previous computerized SRT studies (Table 1). When corrected for the hardware delays associated with the video display and mouse response (17.8 ms), “true” SRTs in Experiment 1 ranged from 200 ms in the youngest subject group to 222 ms in the oldest, i.e., 15–30 ms above the SRT latencies reported by Galton for subjects of similar age (Johnson et al., 1985). However, based on Galton’s notebooks, Dodonova and Dodonov (2013) argued that Galton recorded the shortest-latency SRT obtained out of three independent trials per subject. Assuming a trial-to-trial SRT variance of 50 ms (see Table 1), Galton’s reported single-trial SRT latencies would be 35–43 ms below the mean SRT latencies predicted for the same subjects; i.e., the mean SRT latencies observed in Experiment 1 would be slightly less than the mean SRT latencies predicted for Galton’s subjects. Therefore, in contrast to the suggestions of Woodley et al. (2013), we found no evidence of slowed processing speed in contemporary populations.”

They go on to say: “When measured with high-precision computer hardware and software, SRTs were obtained with short latencies (ca. 235 ms) that were similar across two large subject populations. When corrected for hardware and software delays, SRT latencies in young subjects were similar to those estimated from Galton’s historical studies, and provided no evidence of slowed processing speed in modern populations.”.

What the authors are saying is that, correcting for device lag, there is no appreciable difference in simple RT between Galton’s sample and modern ones. Dodonova and Dodonov claimed that Galton did not use means in computing his samples. Dodonova et al. constructed a pendulum similar to Galton’s to ascertain its accuracy; they concluded it would have been a highly accurate device devoid of the latencies that plague modern digital systems: “What is obvious from this illustration is that RTs obtained by the computer are by a few tens of milliseconds longer than those obtained by the pendulum-based apparatus”.

They go on to say: “it is very unlikely that Galton’s apparatus suffered from a problem of such a delay. Galton’s system was entirely mechanical in nature, which means that arranging a simple system of levers could help to make a response key very short in its descent distance”.


There are two interpretations available to us. The first is that no decline whatsoever took place. If reaction time is used as the sole proxy for g, then it appears, according to Dodonova and Woods, who provide a compelling argument that I confirmed using data from mass online testing, that no statistically significant increase in RT has transpired.

Considering the extensive literature showing negative fertility patterns on g, it seems implausible that no decline has occurred. It appears, rather, that the decline is so subtle as not to be picked up by RT: the signal is weak in an environment of high noise. In an interview with intelligence blogger “Pumpkin Person”, Davide Piffer argues that based on his extensive computation of polygenic data, g has fallen 3 points per century:

“I computed the decline based on the paper by Abdellaoui on British [Education Attainment] PGS and social stratification and it’s about 0.3 points per decade, so about 3 points over a century.

It’s not necessarily the case that IQ PGS declined more than the EA PGS..if anything, the latter was declining more because dysgenics on IQ is mainly via education so I think 3 points per century is a solid estimate”

Since Galton’s 1889 study, Western populations may have lost 3.9 points. What is fascinating about this number is how close it is to the IQ of East Asians, who average 104-105. East Asia industrialized only very recently, China only in the 1980s, so the window for dysgenics to operate has been very narrow. Japan has been industrialized for longer, since the turn of the century, so selection pressures would likely have relaxed earlier; this presents a paradox, since Japan’s IQ appears very close to, if not higher than, that of China and South Korea. Of course this is only rough inference: these populations are somewhat genetically different, albeit with minor differences, but still different enough as far as psychometric comparisons are concerned. Southern China has greater Australasian/Malay admixture, which reduces its average compared to Northern China. For all intents and purposes, East Asian (Mongoloid) IQ has remained remarkably steady at 105, indicating an “apogee” of g in pre-industrial populations. Using indirect markers of g, Mongoloids have larger brains, slower life history speeds, and faster visual processing speeds than whites, corresponding to an ecology of harsh climate (colder winter temperatures than Europe; Nyborg 2003). If any population reached a climax of intelligence, it would likely have been North East Asians. Did Europe feature unique selective pressures?

Unlikely: if one uses a model of “Clarkian selection” via downward mobility, Unz documented a similar process in NEA. Additionally, plagues, climatic disruptions, and mini ice ages afflicted NEA equally if not more frequently than Europe. It is plausible to argue that group selection in NEA would have been markedly weaker, since inter-group conflict was less frequent: China has historically been geographically unified, with major wars between groups being rare compared to Europe’s geographic disunity and practically constant inter-group conflict. But NEA also includes Japan, which shows all the markers of strong group selection: high ethnocentrism, conformity, in-group loyalty and sacrifice, and a very strong honor culture. If genius were a product of strong group selection, as warring tribes are strongly rewarded by genius contributions in weaponry and the like, one would expect genius to be strongly tied to group selection, which appears not to be the case. Europeans show lower ethnocentrism and group selection than North East Asians on almost all metrics according to Dutton’s research, which refuted some of Rushton’s contradictory findings. A usual argument in the HBD community, mainly espoused by Dutton, is that the ultra-harsh ecology of NEA, featuring frigidly cold winters, pushed the population into a regime of stabilizing selection (selection that reduces variance), resulting in low frequencies of outlier individuals. No genetic or trait analysis has been performed to compare the degree of variance in key traits such as g, personality, or brain size; what is needed is a global study of the coefficients of additive genetic variation (CVA) to ascertain the degree of historical stabilizing versus disruptive selection. Genius has also been argued to be under negative frequency-dependent selection, where the trait is only fitness-salient so long as it remains rare; there is little reason to believe genius falls under this category.
High cognitive ability would be universally under selection, and outlier abilities would simply follow that weak directional selection. Insofar as Dutton is correct, genius may come with fitness-reducing baggage, such as a bizarre or deviant personality and general anti-social tendencies; this has been argued repeatedly but never conclusively demonstrated. The last remaining theory is the androgen-mediated genius hypothesis. If one correlates per capita Nobel prizes with the rate of left-handedness as a proxy for testosterone, or with national differences in testosterone directly (I do not believe Dutton did the latter), and analyzes only countries with a minimum IQ of 90, testosterone correlates more strongly than IQ, since the extremely low per capita Nobel prize rates in NEA cause the IQ correlation to collapse.

In summary, basic logic points to some decline, but a more modest one, perhaps at most 5 points since 1850.

To be generous to the possibility that Victorian g was markedly higher, we run a basic analysis to estimate the historical frequency of outlier levels of g, assuming a Victorian mean of 112.

We use the example of the British Isles for this simple experiment. In 1700, the population of England and Wales was 5,200,000. Two decades into this century, the population had increased to 42,000,000, excluding immigrants and non-English natives. Charlton and Woodley infer a loss of 1 SD from 1850 onward; we use a more conservative estimate of 0.8 SD above the current mean as the pre-industrial peak.

This would mean 1700 England produced roughly 163,000 individuals with cognitive ability of 140 or above, from a mean of 112 and an SD of 15. For today’s population, we assume the variance has increased slightly due to increasing genetic diversity and stronger assortative mating, so we use a slightly higher SD of 15.5 with a mean of 100. From today’s white British population of 42,000,000, there are about 205,000 individuals 2.6 SD above the current Greenwich mean. If we assume no increase in variance, which is unlikely considering the increase in genetic diversity afforded by an expanding population providing room for more mutation, the number is 168,000.
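
These tail counts can be reproduced with the normal model; a minimal sketch using Python’s `statistics.NormalDist`, with the population figures, means, and SDs taken from the text above:

```python
from statistics import NormalDist

def tail_count(population: int, mean: float, sd: float, cutoff: float) -> int:
    """Number of individuals above `cutoff` under a normal ability model."""
    tail = 1.0 - NormalDist(mu=mean, sigma=sd).cdf(cutoff)
    return round(population * tail)

# England and Wales, 1700: mean 112, SD 15, population 5.2 million
n_1700 = tail_count(5_200_000, 112, 15, 140)

# White British today: mean 100, SD 15.5, population 42 million
n_today = tail_count(42_000_000, 100, 15.5, 140)

print(n_1700, n_today)  # roughly 160,000 and 205,000-210,000
```

The small discrepancies against the figures quoted in the text come from rounding of the z-scores, not from a different model.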

Three themes can be inferred from this very crude estimate.

The share of individuals with extremely high cognitive ability may very well have fallen, but the total number has remained remarkably steady once the substantial increase in population is accounted for.

Secondly, this would indicate that high g in today’s context may mean something very different from high g in a pre-industrial setting.

Thirdly, the global population of high-g individuals is extraordinary, strongly indicating that the pre-industrial population possessed traits not measurable by g alone which accounted for its prodigious creative abilities, and that these were likely concentrated in European populations. There is no reason to believe this enigmatic, unnamed trait was not normally distributed; if it followed a pattern similar to standard g, today’s population would produce fewer such individuals as a ratio, but at the aggregate level the total number would remain steady. With the massive populations of Asia, primarily India and China, a rough estimate based on Lynn’s IQ data gives around 13,500,000 individuals in China with an IQ of 140 or above, based on a mean of 105 and an SD of 15. There is no evidence that East Asian SDs are smaller than European ones, as claimed by many in the informal HBD community. While China excels in fields like telecommunications, artificial intelligence, and advanced manufacturing (high-speed rail, etc.), there has been little in the way of major breakthrough innovation on par with pre-modern European genius, especially in theoretical science, despite a massive numerical advantage: roughly 85 times more such individuals than in 1700 England. Genius is thus a specialized ability not captured by g tests. It seems genius is enabled by g in some form of synergistic epistasis, “activated” past a certain threshold of g in the presence of one or more unrelated and unknown cognitive traits, often claimed to be a cluster of unique personality traits, although this model has yet to be proven. For India, we take a mean of 76 from David Becker’s dataset; India’s ethnic and caste diversity would strongly favor a larger SD, but for the sake of this estimate we use an SD of 16.
We are left with roughly 41,000 individuals in India above this cutoff. This number does not reconcile with the number of high-ability individuals India is actually producing, so either the mean of 76 is far too low or the SD must be far higher. Yet even among these 40,000-odd individuals, none are displaying extraordinary abilities closely comparable to genius in pre-modern Europe, indicating either deep population differences in creative potential or that g alone fails to capture these abilities. Indian populations are classified as closer to Caucasoid in genetic ancestry modeling, which allows us to speculate whether they are also closer to Caucasoids in personality traits, novelty-seeking, risk-taking, androgen profiles, and assorted other traits that contribute to genius (Dutton and Kura, 2016).

Despite Europe’s prodigious achievements in technology and science, which have remained unsurpassed by comparably intelligent civilizations, ancient China did muster some remarkable achievements. Lynn writes: “One of the most perplexing problems for our theory is why the peoples of East Asia with their high IQs lagged behind the European peoples in economic growth and development until the second half of the twentieth century.” Until more parsimonious models of the origin of creativity and genius are developed, rough historiometric analysis using RT as the sole proxy may be of limited use. Figueredo and Woodley developed a diachronic lexicographic model using higher-order words as another proxy for g. One issue with this model is that it may simply be measuring a natural process of language simplification over time, which may reflect an increasing emphasis on the speed of information delivery rather than pure accuracy. It is logical to assume that in a modern setting, where information density and speed of dissemination are extremely important, a smaller number of simpler words is used more frequently (Zipf’s law). Additionally, the fact that far fewer individuals, likely only those of the highest status, engaged in writing in pre-modern times should not be overlooked: most of the population would not have had the leisure time to write, whereas in modern times the nature of written text reflects the palatability of a simpler style catering to the masses.

Forbes, G. (1945). The effect of certain variables on visual and auditory reaction times. Journal of Experimental Psychology.

Woods et al. (2015). Factors influencing the latency of simple reaction time. Frontiers in Human Neuroscience.

Dodonova et al. (2013). Is there any evidence of historical slowing of reaction time? No, unless we compare apples and oranges. Intelligence.

Woodley and te Nijenhuis (2013). Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time. Intelligence.

Detailed statistics

155 157 154 191 157 164 151 173 158 134 179 152 172 176 163 139 155 182 166 169 179 155 152 169 205 170 149 143 170 142 143 149 174 130 149 139 142 170 127 131 152 127 136 124 125 157 149 127 124 139 158 149 130 149 136 155 143 145 185 152 105 152 130 139 139 140 130 152 166 158 134 142 128 140 155 127 131 139 145 146 139 127 152 145 142 140 143 112 182 185 133 133 130 145 154 158 152 161 152 173 134 145 133 139 148 152 173 158 176 151 181 155 176 149 157 163 167 143 160 145 200 182 140 155 154 148 140 173 173 152 142 143 127 136 164 139 133 145 146 142 149 140 142 124 151 182 166 133 170 152 164 181 121 170 185 164 133 133 149 146 149 119 188 154 150 146 143 151 173 152 160 157 167 148 145 140 155 182 139 166 163 152 170 169 149 136 155 167 154 179 148 155 124 170 134 155 151 181 146 130 173 194 140 131 149 172 182 149 161 155 151 167 157 151 143 142 169 163 136 157 164 133 131 173 133 151 133 143 160 139 157 164 130 131 173 133 151 133 143 152 149 157 142 139 164 136 142 158 145 155 130 166 136 148 133 161 134 145 151 173 146 142 152 166 158 151 173 148 161 172 143 130 148 155 163 142 176 164 173 166 160 142 133 124 152 137 170 142 133 118 152 145 124 151 130 137 157 157 164 155 149 136 137 131 161 142 143 148 115 161 148 167 151 130 139 154 142 149 143

All reaction times recorded

Standard deviation = 16.266789

Variance σ² = 264.60843

Count = 319

Mean = 150.81505

Sum of squares SS = 84410.088
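
The summary statistics above follow the population convention (variance = SS/n, and the SD is its square root). A minimal sketch, demonstrated here on the first five recorded values only rather than the full 319-sample set:

```python
import math

def summary(data):
    """Population-convention summary statistics for a list of values."""
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations
    variance = ss / n                        # population variance, SS/n
    sd = math.sqrt(variance)
    return mean, ss, variance, sd

# First five reaction times (ms) from the table above; the full
# dataset of 319 values would be passed in exactly the same way.
mean, ss, variance, sd = summary([155, 157, 154, 191, 157])
print(f"mean={mean:.2f} SS={ss:.2f} var={variance:.2f} SD={sd:.2f}")
```

Passing the complete list reproduces the count, mean, SS, variance, and SD quoted above.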

Anhydrous ammonia prices rise to nearly $730/ton in July


Anhydrous ammonia (NH3) prices continue to rise. The increase is roughly commensurate with the uptick in natural gas prices to $3.6/1000 ft3, a high not seen since 2018 (excluding the momentary jump in February caused by an aperiodic cold event in Texas). If oil reaches a sustained period of $100+, natural gas will follow its usual ratio with oil, sending anhydrous ammonia well above $800, likely into the $900 range. Pochari Technologies’ process-intensified ammonia system will prove all the more competitive in this future peak-hydrocarbon environment. The beauty of this technology is that instead of depending on an inherently volatile commodity (natural gas), which is for the most part an exhaustible resource and hence gradually rises in price over time, Pochari Technologies relies only on polysilicon, which will continue to fall in price with increased production, since silica is effectively inexhaustible (silicon is the second most abundant element in the earth’s crust, at roughly 28%). Note that according to the USDA statistics, effectively no sellers offer prices below $700, so the standard deviation (SD) is very small; it is therefore unlikely that even savvy farmers can snatch up good deals.

Reduced-CAPEX alkaline electrolyzers using a commercial-off-the-shelf (COTS) component design philosophy.



Dramatically reducing the cost of alkaline water electrolyzers using high surface area mesh electrodes, commercial off the shelf components and non-Zirfon diaphragm separators.

Christophe Pochari, Pochari Technologies, Bodega Bay California

Alkaline electrolyzer technology is ripe for dramatic cost reduction. Current alkaline electrolyzers are expensive beyond what material costs would predict, mainly due to very small production volumes, a noncompetitive market with a small number of big players, and relatively little use of the COTS (commercial off the shelf) approach to cost reduction. Pochari Technologies’ researchers have therefore applied this methodology to bring to market affordable hydrogen generators fabricated from readily available high-quality components, raw materials, and equipment procured on Alibaba.com, ready to be assembled as kits to reduce labor costs. An alkaline cell is a relatively simple system consisting of four major components: the electrodes (woven wire mesh), the gaskets (made of cheap synthetic rubbers such as EPDM), and the diaphragm membrane separating oxygen from hydrogen while permitting sufficient ionic conductivity. Typical diaphragm materials are composites of potassium titanate (K2TiO3) fibers and polytetrafluoroethylene (PTFE), as felt or woven fabric; polyphenylene sulfide coated with zirconium oxide (Zirfon); polysulfone; and asbestos coated with polysulfone. Many polymers are suitable for constructing separators, such as PTFE (Teflon) and polypropylene. “A commercially available polyethersulfone ultrafiltration membrane (marketed as Pall Corporation, Supor®-200) with a pore size of 0.2 um and a thickness of 140 um was employed as the separator between the electrodes.” Nylon monofilament mesh finer than 600 mesh/inch, or with a pore size of 5 microns, can also be used. Polyethersulfone is ideal due to its small pore size, retaining high H2/O2 selectivity at elevated pressures, and it can handle temperatures up to 130 C.
If polyethersulfone is not satisfactory (its degradation rate is excessive above 50 C), Zirfon clones are available on B2B marketplaces such as https://b2b.baidu.com for $30/m2 from Shenzhen Maibri Technology Co., Ltd.

The fourth component is the pair of “end plates”: heavy-duty metallic or composite flat sheets housing a series of tie rods that press the stack tightly together to maintain sufficient sealing pressure within the stack sandwich. For higher-pressure systems, up to 30 bar, the endplates encounter significant force. Unlike PEM technology, noble-metal intensity in alkaline technology is relatively small; if nickel is considered a “noble” metal, then alkaline technology is intermediate. Nickel is neither abundant nor rare, being approximately the 23rd most abundant element. For an alkaline electrolyzer using a high-surface-area electrode, a nickel mesh loading of under 500 grams/m2 of active electrode surface area is needed to achieve an anode life of 5 or more years, assuming a corrosion rate below 0.25 MPY. With current densities of 500 milliamps/cm2 at 1.7-2 volts achievable at 25-30% KOH concentration, power densities of nearly 10 kW/m2 are realizable. This means a one-megawatt electrolyzer at an efficiency of 75% (45 kWh/kg-H2 LHV) would use 118 square meters of active electrode surface area. Assuming the surface/density ratio of a standard 80×80 mesh, 400 grams of nickel is used per square meter of total exposed mesh-wire area. Thus, a total of 2.25 kg of nickel is needed per kg of hydrogen produced per hour, and for a 1-megawatt cell the nickel would cost only about $1000 assuming $20/kg. This number simply doubles if the TBO (time between overhauls) of the cell is to be increased to 10 years, or if the power density of the cell is halved. Pochari Technologies plans to use carbon-steel electrodes to replace nickel in the future to further reduce CAPEX below $30/kW; our long-term goal is $15/kW, compared to $500 for today’s legacy systems from Western manufacturers.
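
The nickel arithmetic above can be sketched as follows; all figures come from the text (the 8.5 kW/m2 power density is implied by 118 m2 per megawatt), and the $20/kg nickel price is the assumption stated above:

```python
# Nickel requirement and cost for a 1 MW alkaline stack.
POWER_KW = 1000              # stack rating
EFF_KWH_PER_KG = 45          # 75% LHV efficiency -> 45 kWh per kg H2
POWER_DENSITY_KW_M2 = 8.5    # ~500 mA/cm2 at ~1.7 V
LOADING_G_M2 = 400           # nickel mesh loading per m2 of exposed area
NICKEL_USD_KG = 20           # assumed nickel price

area_m2 = POWER_KW / POWER_DENSITY_KW_M2       # ~118 m2 of electrode
nickel_kg = area_m2 * LOADING_G_M2 / 1000      # total nickel mass
h2_kg_per_hr = POWER_KW / EFF_KWH_PER_KG       # ~22 kg H2 per hour
nickel_per_kg_hr = nickel_kg / h2_kg_per_hr    # ~2.1-2.25 kg Ni per kg-H2/hr
cost_usd = nickel_kg * NICKEL_USD_KG           # ~$940-1000 per MW
print(f"{area_m2:.0f} m2, {nickel_kg:.1f} kg Ni, ${cost_usd:.0f}")
```

Halving the power density or doubling the loading for a 10-year TBO simply doubles `nickel_kg` and `cost_usd`, as noted above.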
Carbon steel exhibits a corrosion rate of 0.66 MPY. While this is significantly above nickel, carbon steel costs about $700/ton (iron itself roughly $200/ton) while nickel is $18,000/ton, so despite a corrosion rate at least 3x higher, the material cost is roughly 25x lower, a net advantage of about 8.5x for carbon steel. The disadvantage of carbon steel, despite the lower capex, is a decreased MTBO (mean time before overhaul). Pochari Technologies has designed the cell to be easy to disassemble for replacing corroded electrodes, and we are also actively studying low-corrosion ionic liquids to replace potassium hydroxide. We are testing a 65Mn (0.65% C) carbon steel electrode in 20% KOH at up to 50 C and observing low corrosion rates, confirming previous studies. Pochari Technologies is testing these carbon-steel electrodes for 8000 hours to ascertain an exact mass-loss estimate.

What kind of current density can be achieved by smooth plates?

Current densities of 200 mA/cm2 at 1.7 volts (3.4 kW/m2) yield an efficiency of 91% even with non-activated nickel electrodes.



For a lower corrosion rate of 1 um/yr, a total mass loss of about 7% per year occurs with a surface/mass ratio of 140 grams/m2 of exposed area; the nickel requirement is then only 17.5 kg, or $350, for one megawatt. Although this number is achievable, higher corrosion rates will likely be encountered in practice, so to ensure sufficient electrode reserve, a nickel loading of around 400-500 grams/m2 is chosen. Pure nickel experiences an excessively high corrosion rate when it is “active”; it becomes “passive” when a sufficient concentration of iron (NiFe2O4) or silicate is present in the oxide layer. Incoloy alloy 800, with 30% Ni, 20% Cr, and 50% Fe, experiences a corrosion rate of 1 um/yr at 120 C in 38% KOH, while pure nickel corrodes at over 200 um/yr. “The ‘active’ corrosion of nickel corresponds to the intrinsic behavior of this metal in oxygenated caustic solutions; the oxide layer is predominantly constituted of NiO at 180°C and of Ni(OH)2 at 120°C. The nickel corrosion is inhibited when the oxide layer contains a sufficient amount of iron or silicon.” The results of this study indicate the ideal alloy contains around 34% Ni, 21% Cr, and 45% Fe. The costs of the three elements are $18/kg, $9/kg, and $0.2/kg respectively, giving a weighted average of $8.1/kg. For a passive corrosion rate of 1 um/yr, a 10% annual material loss corresponds to an electrode mesh loading of 90-100 grams/m2, or $0.11/kW; that is 11 cents per kW. This does not include mesh-weaving costs, but a 600-mesh weaving machine costs $13,000, so meshing costs are minimal, less than a few cents per square meter.
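
The ~7% mass-loss figure follows directly from the corrosion rate and the mesh loading; a minimal sketch (nickel density of 8.9 g/cm3 is the one assumption not stated in the text):

```python
NI_DENSITY_G_CM3 = 8.9  # density of nickel (assumed)

def annual_loss_fraction(corrosion_um_per_yr: float, loading_g_m2: float) -> float:
    """Fraction of electrode mass lost per year.

    Removing 1 um of metal from 1 m2 of exposed surface removes
    1 cm3 of metal, i.e. `density` grams per m2 per micron per year.
    """
    loss_g_m2 = corrosion_um_per_yr * NI_DENSITY_G_CM3
    return loss_g_m2 / loading_g_m2

# 1 um/yr on a 140 g/m2 mesh -> roughly 6-7% of the electrode per year
print(f"{annual_loss_fraction(1.0, 140) * 100:.1f}%")
```

The same function shows why a 400-500 g/m2 loading gives the comfortable 2-3%/yr reserve the text targets.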

For the diaphragm separators, using a 200 um thick sheet of polyethersulfone (PES), around 20 grams is used per kilowatt; at a typical PES cost of $25/kg and a density of 1.37 g/cm3, the cost is around $0.50/kilowatt assuming an electrode power density of 6.8 kW/m2 (400 milliamps/cm2 at 1.7 volts). Since Pochari Technologies always adheres to the COTS methodology, the expensive and specialized Zirfon membrane is dispensed with in favor of a more ubiquitous material; this saves considerable cost and eases manufacturability, as the need to purchase a specialized, hard-to-access material is eliminated. Gasket costs are virtually negligible: only 4.8 grams of rubber is needed per kilowatt, and EPDM rubber prices are typically in the range of $2-4/kg. For 30% NaOH at 117 C, a corrosion rate of 0.0063 millimeters per year (0.248 MPY) is observed at an optimal nickel concentration of 80%. This means about 55 grams of Ni is lost per square meter per year; if we choose 10% per year as an acceptable weight loss, we return to 550 grams per square meter as the most realistic target nickel loading, with much lower loadings achievable at reduced corrosion rates. A lower concentration of KOH/NaOH and a lower operating temperature can be utilized as a trade-off between corrosion and power density. The total selling price of these units, including labor and installation, is $30/kW. In 2006, GE estimated alkaline electrolyzers could be produced for $100/kW; clearly, much lower prices are possible today. At an efficiency of 47.5 kWh/kg-H2, this works out to roughly $1430 per kg/hour of hydrogen capacity. After the cell stack, whose cost we have shown can be made very minimal with the COTS design philosophy, the second major cost contributor is the power supply; for a 12-volt DC supply, $50 is a typical price for a 1000-watt module.
Thus, in sum, alkaline stack costs are effectively minuscule, and the cost structure is dominated by the power supplies and the unique requirements of low-voltage, high-amperage direct current. High-efficiency DC power supplies cost as little as $30/kW and last over 100,000 hours.
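
The conversion from $/kW of stack to $ per kg/hour of hydrogen capacity used above is simply specific cost times specific energy; a sketch with the figures from the text:

```python
CAPEX_USD_PER_KW = 30          # target installed stack cost, per the text
SPECIFIC_ENERGY_KWH_KG = 47.5  # kWh of electricity per kg of H2

# A plant sized to produce 1 kg/hr continuously needs 47.5 kW of stack,
# so the capex per kg/hr of capacity is:
capex_per_kg_hr = CAPEX_USD_PER_KW * SPECIFIC_ENERGY_KWH_KG
print(f"${capex_per_kg_hr:.0f} per kg/hr")  # ~$1425, i.e. the ~$1430 figure above
```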

It should be noted that the activity of the nickel electrode depends heavily on its morphology. A smooth sheet has very little activity and is thus not suitable at industrial scale, although for small electrolyzers a smooth catalyst can suffice if power density is not an exigency. Catalyst activity depends not on the total surface area exposed to the reactant, but almost exclusively on the so-called “active sites” or “adsorption sites”: kink sites, ledges and steps, adatoms, and holes. These sites, characterized by local geometric perturbation, account for effectively all the activity of a catalyst; the vast majority of the catalyst area is not active. By achieving a high fraction of active sites, the current density at constant voltage can be increased 10-fold.

The most challenging aspect of manufacturing a high-performance AWE is catalyst preparation. Plasma spraying is the most common method of producing a highly denticulate surface: Raney nickel, an alloy of aluminum and nickel, is sprayed onto the bare nickel substrate, and the high velocity and temperature of the metal particles cause them to mechanically adhere to the surface. After the material has cooled and solidified, the aluminum is leached (extracted) from the surface using a caustic solution, leaving a porous, high-surface-area nickel skeleton. Pochari Technologies is developing a low-cost plasma-spraying machine using ubiquitous microwave components to perform catalyst preparation. Once catalyst surface preparation is complete, the electrolyzer is ready to assemble.


180 C at 38% wt KOH at 4 MPa Oxygen


150 C at 38% wt KOH at 4 MPa Oxygen


120 C at 38% wt KOH at 4 MPa Oxygen


Typical alkaline electrolyzer degradation rate. The degradation rate varies from as little as 0.25% per year to nearly 3%. This number is almost directly a function of the electrocatalyst deactivation due to corrosion.
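
A quick sketch of what that degradation band implies when compounded over a stack’s life (the 10-year horizon is an assumption for illustration, not a figure from the text):

```python
def remaining_performance(annual_degradation: float, years: int) -> float:
    """Fraction of initial performance left after compounding annual losses."""
    return (1.0 - annual_degradation) ** years

# 0.25%/yr vs 3%/yr, compounded over an assumed 10-year stack life
low = remaining_performance(0.0025, 10)   # ~97.5% of initial performance
high = remaining_performance(0.03, 10)    # ~74% of initial performance
print(f"{low:.3f} {high:.3f}")
```

The spread illustrates why electrocatalyst corrosion rate, not stack hardware, dominates the effective lifetime economics.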


Diaphragm membrane rated for up to 100 C in 70% KOH for $124/m2: $8.8/kW


*Note: Sandvik Materials has published data on corrosion rates of various alloys in aerated sodium hydroxide solutions (the exact conditions found in water electrolyzers), and found that carbon steel performs acceptably in up to 30% sodium hydroxide provided temperatures are kept below 80 Celsius.