Using AI-based diffusive size factors for extracted networks#

This notebook illustrates the use of the deep learning-based diffusive conductance algorithm described here. PoreSpy’s diffusive_size_factor_AI includes the steps for predicting the diffusive size factors of conduit images. Note that the diffusive conductance of a conduit can then be calculated by multiplying its size factor by the diffusivity of the phase. The function takes in an image of a segmented porous medium and returns an array of diffusive size factors for all conduits in the image. Therefore, the framework can be applied to a single conduit image as well as a segmented image of an entire porous medium:

(figure: PS_dl)
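Once the size factors are predicted, converting them to conductances is a single multiplication. A minimal sketch of this step, with made-up values purely for illustration:

```python
import numpy as np

# Hypothetical size factors (units of length) for three conduits,
# standing in for the array returned by diffusive_size_factor_AI
size_factors = np.array([1.2e-6, 3.4e-6, 2.1e-6])

# Diffusivity of the phase (m^2/s); value chosen arbitrarily here
D = 2.0e-9

# Diffusive conductance = size factor * phase diffusivity
g_diffusive = D * size_factors
print(g_diffusive)
```

With a diffusivity of 1, as used later in this notebook, the conductances equal the size factors directly.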

Trained model and supplementary materials#

To use diffusive_size_factor_AI, the trained model and the training data distribution are required. The AI model files and additional files used in this example are available here. The folder contains the following files:

  • Trained model weights: This file includes only the weights of the deep learning layers. To use this file, the ResNet50 model structure must be built first.

  • Training data distribution: This file is used to denormalize the predicted values based on the normalization transform applied to the training data. The denormalization step is included in the diffusive_size_factor_AI method.

  • Finite difference diffusive conductance: This file is used in this example to compare the predictions with the finite difference method for the segmented regions. Note: finite difference-based size factors can be calculated using PoreSpy’s diffusive_size_factor_DNS method.
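To illustrate what the denormalization step does, here is a minimal sketch. The actual transform applied inside diffusive_size_factor_AI is not shown here; this sketch assumes a simple standardization (zero mean, unit variance) of the training targets, and the array values are made up:

```python
import numpy as np

# Hypothetical training targets; in this notebook they are loaded
# from g_train_original.hdf5
g_train = np.array([1.0, 2.0, 3.0, 4.0])

# Statistics of the training distribution
mu, sigma = g_train.mean(), g_train.std()

def denorm(pred_norm):
    """Map a normalized model output back to physical units."""
    return pred_norm * sigma + mu

print(denorm(0.0))  # a normalized prediction of 0 maps back to the training mean
```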

Let’s install the required packages and download the model files needed to run this notebook:

try:
    import tensorflow as tf
except ImportError:
    !pip install tensorflow

try:
    import sklearn
except ImportError:
    !pip install scikit-learn

import os

if not os.path.exists("sf-model-lib"):
    !git clone https://github.com/PMEAL/sf-model-lib

Also, since the model weights have been stored in chunks, they need to be recombined first:

import importlib
h5tools = importlib.import_module("sf-model-lib.h5tools")
DIR_WEIGHTS = "sf-model-lib/diffusion"
fname_in = [f"{DIR_WEIGHTS}/model_weights_part{j}.h5" for j in [0, 1]]
h5tools.combine(fname_in, fname_out=f"{DIR_WEIGHTS}/model_weights.h5")
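The h5tools.combine helper lives in the sf-model-lib repository. A minimal sketch of the same idea, assuming the parts are plain byte chunks of a single file split in order (the real helper may differ):

```python
import os
import tempfile

def combine_chunks(part_paths, out_path):
    """Concatenate byte chunks, in order, back into a single file."""
    with open(out_path, "wb") as fout:
        for part in part_paths:
            with open(part, "rb") as fin:
                fout.write(fin.read())

# Demonstrate with two small dummy chunks
tmp = tempfile.mkdtemp()
parts = [os.path.join(tmp, f"part{j}.bin") for j in (0, 1)]
for p, data in zip(parts, (b"hello ", b"world")):
    with open(p, "wb") as f:
        f.write(data)
combine_chunks(parts, os.path.join(tmp, "combined.bin"))
```

Splitting large binaries into chunks like this is a common workaround for per-file size limits on hosting services.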

Note that to use diffusive_size_factor_AI, scikit-learn and TensorFlow must be installed. Import the necessary packages and the AI model:

import os
import warnings

import h5py
import numpy as np
import openpnm as op
import porespy as ps
import scipy as sp
from matplotlib import pyplot as plt
from sklearn.metrics import r2_score

ps.visualization.set_mpl_style()
warnings.filterwarnings("ignore")
path = "./sf-model-lib/diffusion"
path_train = os.path.join(path, 'g_train_original.hdf5')
path_weights = os.path.join(path, 'model_weights.h5')
g_train = h5py.File(path_train, 'r')['g_train'][()]
model = ps.networks.create_model()
model.load_weights(path_weights)

Create test image#

We can create a 3D image using PoreSpy’s polydisperse_spheres generator:

np.random.seed(17)
shape = [120, 120, 120]
dist = sp.stats.norm(loc=7, scale=5)
im = ps.generators.polydisperse_spheres(shape=shape,
                                        porosity=0.7,
                                        dist=dist,
                                        r_min=7)
fig, ax = plt.subplots(1, 1, figsize=[4, 4])
ax.imshow(im[:, :, 20], origin='lower', interpolation='none')
ax.axis(False);
(figure: 2D slice of the generated polydisperse spheres image)

Extract the network#

We then extract the pore network of the porous medium image using PoreSpy’s snow2 algorithm. snow2 returns the segmented image of the porous medium as well as the extracted network data.

snow = ps.networks.snow2(im, boundary_width=0, parallelization=None)
regions = snow.regions
net = snow.network

Apply diffusive_size_factor_AI#

The AI-based diffusive size factors of the conduits in the extracted network can then be calculated by applying diffusive_size_factor_AI to the segmented regions. We then define a throat.diffusive_size_factor_AI property and assign the predicted size factors to it.

conns = net['throat.conns']
size_factors = ps.networks.diffusive_size_factor_AI(regions,
                                                    model=model,
                                                    g_train=g_train,
                                                    throat_conns=conns)
net['throat.diffusive_size_factor_AI'] = size_factors

The resulting network can then be imported into OpenPNM for later use, such as diffusive mass transport simulations. Let’s visualize the network:

pn = op.io.network_from_porespy(net)
fig, ax = plt.subplots(1, 1, figsize=[5, 5])
ax = op.visualization.plot_connections(network=pn, alpha=0.8, color='grey', ax=ax)
ax = op.visualization.plot_coordinates(network=pn, ax=ax, color='b', markersize=50)
(figure: extracted network visualized with OpenPNM; throats in grey, pores in blue)

Compare with finite difference method#

Now that the extracted network includes AI-based diffusive size factor data, we can use the network to compare the accuracy of diffusive_size_factor_AI, the shape factor-based method, and the geometry-based method (no shape factor) against the finite difference method. Assuming a generic phase with a diffusivity of 1, the diffusive conductance of the conduits is equal to their diffusive size factors. The diffusive conductance of the conduits can be calculated using OpenPNM’s generic_diffusive method. The diffusive conductance using the shape factor-based method, assuming cone and cylinder shapes for pores and throats, can be calculated as follows:

print(pn)
══════════════════════════════════════════════════════════════════════════════
net : <openpnm.network.Network at 0x7f0256f989a0>
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
  #  Properties                                                   Valid Values
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
  2  throat.conns                                                    744 / 744
  3  pore.coords                                                     199 / 199
  4  pore.region_label                                               199 / 199
  5  pore.phase                                                      199 / 199
  6  throat.phases                                                   744 / 744
  7  pore.region_volume                                              199 / 199
  8  pore.equivalent_diameter                                        199 / 199
  9  pore.local_peak                                                 199 / 199
 10  pore.global_peak                                                199 / 199
 11  pore.geometric_centroid                                         199 / 199
 12  throat.global_peak                                              744 / 744
 13  pore.inscribed_diameter                                         199 / 199
 14  pore.extended_diameter                                          199 / 199
 15  throat.inscribed_diameter                                       744 / 744
 16  throat.total_length                                             744 / 744
 17  throat.direct_length                                            744 / 744
 18  throat.perimeter                                                744 / 744
 19  pore.volume                                                     199 / 199
 20  pore.surface_area                                               199 / 199
 21  throat.cross_sectional_area                                     744 / 744
 22  throat.equivalent_diameter                                      744 / 744
 23  throat.diffusive_size_factor_AI                                 744 / 744
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
  #  Labels                                                 Assigned Locations
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
  2  pore.all                                                              199
  3  throat.all                                                            744
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
pn['pore.diameter'] = pn['pore.inscribed_diameter']
pn['throat.diameter'] = pn['throat.inscribed_diameter']
pn['throat.coords'] = pn['throat.global_peak']
pn.add_model(propname='throat.length',
             model=op.models.geometry.throat_length.hybrid_cones_and_cylinders)
pn.add_model(propname='throat.diffusive_size_factors',
             model=op.models.geometry.diffusive_size_factors.cones_and_cylinders,)
phase = op.phase.Phase(network=pn)
phase['pore.diffusivity'] = 1
phase['throat.diffusivity'] = 1
phase.add_model(propname='throat.diffusive_conductance',
                model=op.models.physics.diffusive_conductance.generic_diffusive)
g_SF = np.copy(phase['throat.diffusive_conductance'])

To find the diffusive conductance of the conduits using the geometry-based method (no shape factor), we assume cylindrical pores and throats:

cn = pn.conns
L1 = pn['pore.diameter'][cn[:, 0]] / 2
Lt = pn['throat.length']
L2 = pn['pore.diameter'][cn[:, 1]] / 2
D1, Dt, D2 = pn['pore.diameter'][cn[:, 0]], pn['throat.diameter'], pn['pore.diameter'][cn[:, 1]]
A1, At, A2 = np.pi * D1**2 / 4, np.pi * Dt**2 / 4, np.pi * D2**2 / 4
g_Geo = 1 / (L1 / A1 + L2 / A2 + Lt / At)
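As a quick sanity check on the series-resistance formula above, here is the same calculation for a single conduit with made-up dimensions (lowercase names to avoid clashing with the arrays in the previous cell):

```python
import numpy as np

# Made-up dimensions for one pore-throat-pore conduit (arbitrary units)
d1, dt, d2 = 10.0, 4.0, 8.0          # diameters of pore 1, throat, pore 2
l1, lt, l2 = d1 / 2, 6.0, d2 / 2     # pore radii as lengths, plus throat length

# Cross-sectional areas of the cylindrical elements
a1, at, a2 = np.pi * d1**2 / 4, np.pi * dt**2 / 4, np.pi * d2**2 / 4

# The three elements act as resistors in series, so resistances (L/A) add
g = 1 / (l1 / a1 + l2 / a2 + lt / at)
print(g)
```

The throat term lt/at dominates the sum because the throat is the narrowest element, which is why the total conductance is controlled by the throat.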

The diffusive conductance of the conduits using the AI-based method:

phase.add_model(propname='throat.diffusive_conductance',
                model=op.models.physics.diffusive_conductance.generic_diffusive,
                size_factors='throat.diffusive_size_factor_AI')
g_AI = np.copy(phase['throat.diffusive_conductance'])

The finite difference-based diffusive size factors were calculated using PoreSpy’s diffusive_size_factor_DNS method. However, due to the long runtime of the DNS function, the results were precomputed, saved in the example data folder, and loaded here. The following code was used to compute the finite difference-based values with PoreSpy:

g_FD = ps.networks.diffusive_size_factor_DNS(regions, conns)

Now let’s compare the diffusive conductance calculated from the geometry-based, shape factor-based, and AI-based methods with the reference finite difference method:

fname = os.path.join(path, 'g_finite_difference120-phi7.hdf5')
g_FD = h5py.File(fname, 'r')['g_finite_difference'][()]
max_val = np.max([g_FD, g_AI, g_Geo, g_SF])
fig, ax = plt.subplots(1, 3, figsize=[10, 4])

ax[0].plot(g_FD, g_Geo, '*', [0, max_val], [0, max_val], 'r')
ax[0].set_xlim([0, max_val])
ax[0].set_ylim([0, max_val])
ax[0].set_xlabel('finite difference $g_d$')
ax[0].set_ylabel('geometry based $g_d$')
ax[0].set_title('$R^2$ = ' + str(np.round(r2_score(g_FD, g_Geo), 2)))

ax[1].plot(g_FD, g_SF, '*', [0, max_val], [0, max_val], 'r')
ax[1].set_xlim([0, max_val])
ax[1].set_ylim([0, max_val])
ax[1].set_xlabel('finite difference $g_d$')
ax[1].set_ylabel('shape factor based $g_d$')
ax[1].set_title('$R^2$ = ' + str(np.round(r2_score(g_FD, g_SF), 2)))

ax[2].plot(g_FD, g_AI, '*', [0, max_val], [0, max_val], 'r')
ax[2].set_xlim([0, max_val])
ax[2].set_ylim([0, max_val])
ax[2].set_xlabel('finite difference $g_d$')
ax[2].set_ylabel('AI based $g_d$')
ax[2].set_title(r'$R^2$ = ' + str(np.round(r2_score(g_FD, g_AI), 2)));
(figure: parity plots of geometry-based, shape factor-based, and AI-based conductance against finite difference values, with R² scores)

As shown in the parity plots, the AI-based method predicts the conductance values with higher accuracy than the geometry-based and shape factor-based methods. A comprehensive comparison between these methods on a large dataset can be found here.