diffusive_size_factor_AI#

PoreSpy’s diffusive_size_factor_AI predicts the diffusive size factors of the conduit images described here. Note that the diffusive conductance of a conduit can then be calculated by multiplying its size factor by the diffusivity of the phase. The function takes in an image of a segmented porous medium and returns an array of diffusive size factors for all conduits in the image. Therefore, the framework can be applied both to a single conduit image and to a segmented image of a porous medium.
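The conversion from size factors to conductances is a simple element-wise product. A minimal sketch with made-up numbers (the size-factor values and diffusivity below are purely illustrative, not outputs of PoreSpy):

```python
import numpy as np

# Hypothetical diffusive size factors for three conduits (units of length, m)
size_factors = np.array([1.2e-6, 3.4e-6, 2.1e-6])

# Diffusivity of the phase, e.g. a solute in water (m^2/s)
D = 2.0e-9

# Diffusive conductance of each conduit (m^3/s)
g_diffusive = D * size_factors
```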


Trained model and supplementary materials#

To use diffusive_size_factor_AI, the trained model and the training data distribution are required. The AI model files and the additional files used in this example are available here. The folder contains the following files:

  • Trained model weights: This file includes only the weights of the deep learning layers. To use it, the ResNet50 model structure must be built first.

  • Training data distribution: This file is used to denormalize the predicted values, inverting the normalization applied to the training data. The denormalization step is included in the diffusive_size_factor_AI method.

  • Finite difference diffusive conductance: This file is used in this example to compare the prediction results with the finite difference method for segmented regions.

  • Pair of regions: This file is used in this example to compare the prediction results with the finite difference method for a pair of regions.
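The denormalization mentioned above is performed internally by diffusive_size_factor_AI; conceptually, it inverts the normalization that was fitted on the training targets. A hypothetical sketch assuming a simple standard-score normalization (the actual transform used by PoreSpy may differ):

```python
import numpy as np

# Stand-in for the training target distribution (g_train in PoreSpy)
rng = np.random.default_rng(0)
g_train = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# Statistics fitted on the training targets
mu, sigma = g_train.mean(), g_train.std()

def denormalize(pred_norm):
    """Map normalized network outputs back to physical size factors."""
    return pred_norm * sigma + mu

# A normalized prediction of 0 maps back to the training mean
assert np.isclose(denormalize(0.0), mu)
```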

Let’s install TensorFlow and scikit-learn (if needed) and download the model files required to run this notebook:

try:
    import tensorflow as tf
except ImportError:
    !pip install tensorflow

try:
    import sklearn
except ImportError:
    !pip install scikit-learn

import os

if not os.path.exists("sf-model-lib"):
    !git clone https://github.com/PMEAL/sf-model-lib
Cloning into 'sf-model-lib'...
remote: Enumerating objects: 36, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 36 (delta 0), reused 2 (delta 0), pack-reused 31 (from 1)
Receiving objects: 100% (36/36), 334.57 MiB | 25.38 MiB/s, done.
Resolving deltas: 100% (8/8), done.
Updating files: 100% (8/8), done.

Also, since the model weights have been stored in chunks, they need to be recombined first:

import importlib

h5tools = importlib.import_module("sf-model-lib.h5tools")
DIR_WEIGHTS = "sf-model-lib/diffusion"
fname_in = [f"{DIR_WEIGHTS}/model_weights_part{j}.h5" for j in [0, 1]]
h5tools.combine(fname_in, fname_out=f"{DIR_WEIGHTS}/model_weights.h5")
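PoreSpy’s h5tools.combine handles this reassembly. As a rough illustration only (the real helper’s behavior may differ), combining the top-level datasets of several HDF5 files into one with h5py could look like:

```python
import os
import tempfile

import h5py
import numpy as np

def combine(fnames_in, fname_out):
    """Copy every top-level dataset from each input file into one output file."""
    with h5py.File(fname_out, "w") as out:
        for fname in fnames_in:
            with h5py.File(fname, "r") as f:
                for key in f.keys():
                    out.create_dataset(key, data=f[key][()])

# Round-trip demo with two small temporary files
tmpdir = tempfile.mkdtemp()
parts = [os.path.join(tmpdir, f"part{i}.h5") for i in range(2)]
for i, p in enumerate(parts):
    with h5py.File(p, "w") as f:
        f[f"w{i}"] = np.arange(3) + i
combine(parts, os.path.join(tmpdir, "whole.h5"))
with h5py.File(os.path.join(tmpdir, "whole.h5"), "r") as f:
    combined_keys = set(f.keys())
```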

Note that to use diffusive_size_factor_AI, scikit-learn and TensorFlow must be installed. Import the necessary packages and inspect the function’s signature:

import inspect
import os
import warnings

import h5py
import numpy as np
import scipy as sp
from matplotlib import pyplot as plt
from sklearn.metrics import r2_score

import porespy as ps

ps.visualization.set_mpl_style()
warnings.filterwarnings("ignore")
inspect.signature(ps.networks.diffusive_size_factor_AI)
<Signature (regions, throat_conns, model, g_train, voxel_size=1)>

model, g_train#

Import AI model and training data from the downloaded folder:

path = "./sf-model-lib/diffusion"
path_train = os.path.join(path, "g_train_original.hdf5")
path_weights = os.path.join(path, "model_weights.h5")
g_train = h5py.File(path_train, "r")["g_train"][()]
model = ps.networks.create_model()
model.load_weights(path_weights)

regions#

We can create a 3D image using PoreSpy’s polydisperse_spheres generator and segment it using the snow_partitioning method. Note that the find_conns method returns the connections between regions in the segmented image. The order of values in conns matches the network extraction conns. Therefore, the region with label=1 in the segmented image is mapped to index 0 in conns.
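To make the label-to-index convention concrete, here is a toy conns array (hypothetical values, not computed from an image): each row holds the zero-based indices of a connected pair of regions, so the region labeled 1 appears as index 0.

```python
import numpy as np

# Hypothetical conns for a 3-region image: each row is a (region_i, region_j)
# pair of zero-based indices
conns = np.array([[0, 1], [1, 2]])

# Recover the region labels by adding 1 to the zero-based indices
labels = conns + 1
```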

np.random.seed(17)
shape = [120, 120, 120]
dist = sp.stats.norm(loc=7, scale=5)
im = ps.generators.polydisperse_spheres(shape=shape, porosity=0.7, dist=dist, r_min=7)
results = ps.filters.snow_partitioning(im=im.astype(bool))
regions = results["regions"]
fig, ax = plt.subplots(1, 1, figsize=[4, 4])
ax.imshow(regions[:, :, 20], origin="lower", interpolation="none")
ax.axis(False);
(figure: 2D slice of the segmented regions at z=20)

throat_conns#

PoreSpy’s diffusive_size_factor_AI method takes in the segmented image, the model, the training ground-truth values, and the connections of the regions in the segmented image (throat_conns). In this example the image was created with voxel_size=1; for a different voxel size, the voxel_size argument must be passed to the method.

conns = ps.networks.find_conns(regions)
size_factors = ps.networks.diffusive_size_factor_AI(
    regions, model=model, g_train=g_train, throat_conns=conns
)

Compare with finite difference#

Assuming a diffusivity of 1, the diffusive conductance of the conduits equals their diffusive size factors. Now let’s compare the AI-based diffusive conductances with the conductance values calculated by the finite difference method. The finite difference results were obtained using the steps explained in the paper cited in the introduction.

Note: The finite difference-based diffusive size factors were calculated using PoreSpy’s size factor method diffusive_size_factor_DNS. However, due to the long runtime of the DNS function, the results were saved in the example data folder and are reused in this example. The following code was used to estimate the finite difference-based values with PoreSpy:

# g_FD = ps.networks.diffusive_size_factor_DNS(regions, conns)  # slow; precomputed results are loaded below
fname = os.path.join(path, "g_finite_difference120-phi7.hdf5")
g_FD = h5py.File(fname, "r")["g_finite_difference"][()]
g_AI = size_factors
max_val = np.max([g_FD, g_AI])
plt.figure(figsize=[4, 4])
plt.plot(g_FD, g_AI, "*", [0, max_val], [0, max_val], "r")
plt.xlabel("g reference")
plt.ylabel("g prediction")
r2 = r2_score(g_FD, g_AI)
print(f"The R^2 prediction accuracy is {r2:.3}")

Note on runtime: Most of diffusive_size_factor_AI’s runtime goes into extracting the pairs of conduits, a preparation step common to both the AI and finite difference methods. Once the data is prepared, AI prediction on the tensor takes far less time than the finite difference method, as shown in the cited reference paper.

Apply on one conduit#

PoreSpy’s diffusive_size_factor_AI method can also take in an image of a single pair of regions. Let’s predict the diffusive size factor for a pair of regions using both the AI and DNS methods:

fname = os.path.join(path, "pair.hdf5")
pair_in = h5py.File(fname, "r")
im_pair = pair_in["pair"][()]
conns = ps.networks.find_conns(im_pair)
sf_FD = ps.networks.diffusive_size_factor_DNS(im_pair, throat_conns=conns)
sf_AI = ps.networks.diffusive_size_factor_AI(
    im_pair, model=model, g_train=g_train, throat_conns=conns
)
print(f"Diffusive size factor from FD: {sf_FD[0]:.2f}, and AI: {sf_AI[0]:.2f}")