Well Log Viewer – Python Coding Series

Introduction

To date, we’ve explored some basic production data, fit decline curves, added financial projections to future production, and handled CSVs from a frac stage. In this post, I am going to give some attention to the rock lickers of the energy industry.

Well logs have provided key measurements of the subsurface since the Schlumberger brothers ran the first one in 1927. The digitizing of logging information is a topic of its own and beyond the scope of this post, so I will skip ahead and assume you have an LAS file that you would like to view. If you are wondering what an LAS file is, check out the Canadian Well Logging Society, which maintains the standard. My goal is to provide some comfort in working with LAS files while leaving room for deeper exploration and customization.

Getting Started

First, we need an LAS file to work with. These are often proprietary, but University Lands maintains a database with a few files available for us to tinker with. The specific file I will use in this post is available for download here. If you are new to the LAS file format, you can explore the contents in a plain text editor such as Notepad. I am going to build a Plotly Dash web page that will attempt to categorize, interpret, and display uploaded LAS files.
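
If you would like to peek at an LAS file programmatically before building the web app, lasio makes that easy. Here is a minimal sketch, assuming you have saved the downloaded file locally as example.las (a hypothetical path):

import lasio

# Assumption: the downloaded University Lands file saved locally as "example.las"
las = lasio.read("example.las")

# List the curve mnemonics, units, and descriptions from the ~Curve section
for curve in las.curves:
    print(curve.mnemonic, curve.unit, curve.descr)

# The same data is also available as a pandas DataFrame indexed by depth
print(las.df().head())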

The Setup

# -*- coding: utf-8 -*-
"""
Created on Fri Jan  1 21:01:23 2021

@author: @FracLost

Well log viewer

1 - Open a blank page
2 - User imports an .LAS file
3 - LAS file is loaded and visualized
4 - User can customize/process data
"""
# Our import section, including dash functionality and additional python libraries
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
from plotly import subplots
import plotly.graph_objects as go

import base64
import io
import datetime
import re
import pandas as pd
import numpy as np
import lasio


# Initializing our Dash application
# The meta_tags section is used with the dash_bootstrap_components to help control resizing on smaller devices
# Follow this link for more information on viewports https://developer.mozilla.org/en-US/docs/Web/HTML/Viewport_meta_tag
app = dash.Dash(__name__,
                title='Well Log Viewer',
                meta_tags=[
                    {"name": "viewport", "content": "width=device-width, initial-scale=1"},]
                )

# Our app layout, this is where the HTML is structured for our web page
app.layout = dbc.Container(
    [html.H1('Well Log Viewer', className='twelve columns'),
     html.Hr(),
     #This uploader allows a user to drag and drop or choose a file to be viewed
     dcc.Upload(
        id='upload-data',
        children=html.Div([
            'Drag and Drop or ',
            html.A('Select an .LAS file')
        ]),
        style={
            'width': '100%',
            'height': '60px',
            'lineHeight': '60px',
            'borderWidth': '2px',
            'borderStyle': 'dashed',
            'borderRadius': '5px',
            'textAlign': 'center',
            'margin': '10px'
        },
        # Not currently allowing multiple files to be uploaded
        multiple=False
     ),
     # This section of the HTML will be where our log information is displayed
     html.Div(
            className="section",
            children=[
                html.Div(className="section-title", children="LAS curves"),
                html.Div(
                    className="page",
                    children=[html.Div(id="las-curves", children="")],
                ),
            ],
        ),
     ],
    fluid=True,
)

if __name__ == '__main__':
    app.run_server(debug=False, port=1234)

The code above is the skeleton of our application. If you run it, Plotly Dash will start a local server where you can see your site. Open a web browser and navigate to http://127.0.0.1:1234/. The upload control will open a file dialog, but nothing will happen yet when you attempt to load a file. We will program that functionality next.
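
One small quality-of-life tweak while you experiment: Dash has a hot-reload mode that restarts the server and refreshes the browser whenever you save the file. A sketch of the same entry point with debug mode switched on (everything else in the skeleton stays the same):

# Assumption: same app object as above; debug=True turns on Dash's hot reload
# and the in-browser dev tools while you iterate on the layout and callbacks
if __name__ == '__main__':
    app.run_server(debug=True, port=1234)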

Adding Functionality

...

@app.callback(Output('las-curves', 'children'),
              Input('upload-data', 'contents'),
              State('upload-data', 'filename'),)
def update_output(list_of_contents, list_of_names):
    if list_of_contents is not None:
 
        children = [
            parse_contents(list_of_contents, list_of_names)]
        return children

...

I will take this step by step, as the Plotly Dash callback methodology may seem confusing at first. The code above registers a listener on the upload component in our web app. Once that component receives a file, Dash executes the code inside the “update_output” function. If the contents passed in from the uploader actually contain a file, the next step is to validate and process it. To keep the logic clean, that work happens in a function called “parse_contents.”
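
If you prefer the callback to do nothing at all until a file arrives (rather than implicitly returning None), Dash provides PreventUpdate. A sketch of the same callback with that guard added, assuming the rest of the wiring is unchanged:

from dash.exceptions import PreventUpdate

@app.callback(Output('las-curves', 'children'),
              Input('upload-data', 'contents'),
              State('upload-data', 'filename'),)
def update_output(list_of_contents, list_of_names):
    # Fire only when the uploader actually holds a file; otherwise leave the page untouched
    if list_of_contents is None:
        raise PreventUpdate
    return [parse_contents(list_of_contents, list_of_names)]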

...

# Sourced from https://dash.plotly.com/dash-core-components/upload
def parse_contents(contents, filename):
    content_type, content_string = contents.split(',')
    decoded = base64.b64decode(content_string)
    try:
        if 'csv' in filename:
            # Assume that the user uploaded a CSV file
            df = pd.read_csv(
                io.StringIO(decoded.decode('utf-8')))
        elif 'xls' in filename:
            # Assume that the user uploaded an excel file
            df = pd.read_excel(io.BytesIO(decoded))
        elif '.las' in filename:
            # Assume that the user uploaded an LAS file
            df = lasio.read(decoded.decode('utf-8'))
            # send data to function to generate a graph
            return generate_curves(df)

            
            
    except Exception as e:
        print(e)
        return html.Div([
            'There was an error processing this file.'
        ])

...

For LAS data, the check is to look at the file name and make sure it has “.las” in it (the CSV and Excel branches are carried over from the Dash upload example and are not used here). Once a file passes that test, we use the lasio Python library to read and access the various parts of the LAS structure. The data is read into the variable “df” (which, despite the name, is a lasio LASFile object rather than a DataFrame) and passed to another function, “generate_curves”, that will prepare it for display. If there is an error processing the file, the page updates with a message notifying the user.
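
To make the decoding step less mysterious: the contents string that dcc.Upload hands to the callback is a data URI along the lines of data:application/octet-stream;base64,VkVSUy4u... A small standalone sketch of what parse_contents does with it (the sample payload below is made up for illustration):

import base64

# Hypothetical contents string, in the same shape dcc.Upload provides
contents = "data:application/octet-stream;base64," + base64.b64encode(
    b"~VERSION INFORMATION\n VERS.   2.0 : CWLS LOG ASCII STANDARD\n"
).decode()

content_type, content_string = contents.split(',')  # split the data-URI header from the payload
decoded = base64.b64decode(content_string)           # back to raw bytes
print(decoded.decode('utf-8'))                       # the original LAS text, ready for lasio.read()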

Who doesn’t appreciate a nice set of curves?

...

def generate_curves(data):
    lineWidth = 1
    fontSize = 8
    tickFontSize=8
    height=950
    width=800
    bg_color="white"
    
    
    columns = list(data.curves.keys())
    
    
    yvalues = "DEPT"  # Must be "DEPT" or "DEPTH" , based on las 2.0 mnemonics
    
    # Container for the various plots from the LAS Data
    myPlots = []
    
    # For a test, I know this LAS file contains a GR curve mnemonic
    myPlots.append(["GR"])
  
    fig = subplots.make_subplots( 
        rows=1, cols=len(myPlots), shared_yaxes=True, horizontal_spacing=0,
    )
    
    

    # Iterate through all added curves and add to plot
    for i in range(len(myPlots)):
        lineColorCount = 0
        for column in myPlots[i]:
            # A single line color for now; per-track color palettes are added in the final version
            lineColor = "black"
            fig.append_trace(
                go.Scatter(
                    x=data.curves[column].data,
                    y=data.curves[yvalues].data,
                    name=column,
                    line={
                        "width": lineWidth,
                        "dash": "dashdot" if column in myPlots[0] else "solid",
                        "color": lineColor,
                    },
                ),
                row=1,
                col=i + 1,
            )
            lineColorCount += 1
    
    
    # Adjust the title on the X axis, and the min/max
    fig["layout"]["xaxis{}".format(1)].update(
        title="Gamma Ray",
        range=[0, 150]
    )

    # Reverse the Y track; well logs are commonly displayed from 0 down to depth
    fig["layout"]["yaxis"].update(
        title="Depth", autorange="reversed",
    )
    

    for axis in fig["layout"]:
        if re.search(r"[xy]axis[0-9]*", axis):
            fig["layout"][axis].update(
                mirror="all",
                automargin=True,
                showline=True,
                title=dict(font=dict(family="Arial, sans-serif", size=fontSize)),
                tickfont=dict(family="Arial, sans-serif", size=tickFontSize),
            )

    
    fig["layout"].update(
        height=height,
        #width=width,
        plot_bgcolor=bg_color,
        paper_bgcolor=bg_color,
        hovermode="y",
        legend={"font": {"size": tickFontSize}},
        margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=False),
    )
    
    # Add x axis lines, separating tracks
    fig.update_yaxes(showline=True, linewidth=1, linecolor='#929292', gridcolor="#929292")
    fig.update_xaxes(gridcolor="#929292", linewidth=0.5, linecolor='#929292')

    # Return our generated graphic to the user interface
    return dcc.Graph(figure=fig)

...

This function is where the input file is translated into a thing of beauty (hopefully). The curves contained in the uploaded LAS file are put into the “columns” variable. I have hard coded the Y axis to look for a curve with the mnemonic “DEPT”; if that curve does not exist in the LAS file, the application will throw an error. Next, I created a container called “myPlots” that will (in the future) hold the different sets of curves I want to display, one list per track. For a quick test spin, let's add the “GR” mnemonic and see what our initial output looks like. The majority of the code beyond adding “GR” controls the visualization and can be ignored for now.
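
If you want to guard against files that index on “DEPTH” (or something else) instead of “DEPT”, a small defensive check before building the figure keeps the app from erroring out. A sketch, assuming data is the lasio object passed into generate_curves:

def pick_depth_mnemonic(data):
    # LAS 2.0 files usually carry the index as DEPT, but DEPTH shows up in the wild too
    curve_names = data.curves.keys()  # same call the viewer already uses to list mnemonics
    for candidate in ("DEPT", "DEPTH"):
        if candidate in curve_names:
            return candidate
    # Fall back to the first curve, which lasio treats as the index by convention
    return data.curves[0].mnemonic

# Inside generate_curves, this would replace the hard-coded assignment:
# yvalues = pick_depth_mnemonic(data)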

At this point, you have enough knowledge to manually add curves from your LAS file and stack them in individual tracks. The problem comes when you work through many logs, each with unique curve mnemonics. Next, I will open the choke a little and give our viewer the ability to detect any gamma ray curve in an LAS file.

Taking it Deeper

...

def curve_check(logCurves, curveGroup):
    # Python Sets
    GRCurves = {"GR", "CGGR", "CGR", "DLGR", "ECGR", "EHGR", "HRGR", "GAM", "GAMD", "GAMM", "GAMMA", 
                "GAMMARAY", "GAMNAT", "GCGR", "GCPS", "GRA", "GRAY", "GRC", "GRCD", "GRCF","GRD", 
                "GRDE", "GRDI", "GRG", "GRGC", "GRGM", "GRGM", "GRGS", "GRH", "GRHD", "GRHDIL", 
                "GRI", "GRIN", "GRLL", "GRM", "GRML", "GRN", "GRNC", "GRNP", "GRP", "GRPD", "GRPND", 
                "GRQH", "GRS", "GRSD", "GRSG", "GRSL", "GRT", "GRTO", "GRX", "GRZ", "GSGR", "HGR", "HSGR",
                "LGR", "P01LGR", "P02LGR", "P03LGR", "P06LGR", "PGR", "SGR", "SGRA", "SGRB", "SGRC", 
                "RGR", "HCGR", "GAMMA_RAY", "GAMS", "GRCR", "GRHR", "GRQA", "GRR", "GRV", "MCGR", "GRCN", 
                "GRDP", "GRCC", "GRCG", "GRCL", "GRCM", "GRCP", "GRD4", "GRFB", "GRMN", "GRNB", "GRPO", 
                "GRPR", "GRRF", "TGR", "TOTGR", "GRTOR", "GR1", "GR2", "GRCO", "GRDA", "GRZD", "HHGR", 
                "MGR", "SGRD"
                }
    
    calCurves = {"CALIPER", "CALIP", "CALI", "CAL", "DCAL", "ACAL", "CALA", "CALD", "CALE", "CALH", 
                 "CALL", "CALM", "CALML", "CALN", "CALP", "CALS", "CALT", "CALX", "CALXH", "CALXZ", 
                 "CALY", "CALYH", "CALYHD", "CALYM", "CALYQH", "CALYZ", "CALZ", "CANC", "CANN", 
                 "CAPD", "CAX", "CAY", "CLDC", "CLDM", "CLL0", "CLMR", "CLMS", "CLRM", "CLS2", 
                 "CLTC", "CLXC", "CLXF", "CLYC", "MCAL", "CALXQH", "CLCM", "CR1", "CR2", "CS1M", 
                 "CS2M", "CS3M", "CS4M", "CS5M", "CS6M0", "HCA1", "HCAL", "HCALI", "HCALX", "HCALY", 
                 "XCAL", "YCAL", "CABX", "CABY", "CACN", "CADF", "CADP", "CAMR", "CAXR", "CAYR", 
                 "DCCP", "MLTC", "C1", "C13", "C13A", "C13H", "C24A", "C24", "C24H", "C24I", 
                 "C24L", "C24M", "C24P", "C24Z", "CA", "CA1", "CA2", "CADE", "CAL1", "CAL2", "CAL3", 
                 "CALXM", "CALXQ8", "CALXGH", "CALYQ8", "CLCD", "CLLO", "CQLI", "HD1", "HD2", "HD3", 
                 "HDAR", "HDIA", "HDMI", "HDMN", "HDMX", "HLCA", "LCAL", "SA", "TAC2", "C3", "HHCA", 
                 "MBTC", "TACC", "DZAL"
                 }
    
    #Convert the log curve (python list) to a python set so we can utilize the built in comparison functions
    logSet = set(logCurves)
    
    # This should return a set containing only the curves that are known GR curve mnemonics
    # Reference for python set methods - https://medium.com/better-programming/a-visual-guide-to-set-comparisons-in-python-6ab7edb9ec41
    if curveGroup == "GR":
        return logSet.intersection(GRCurves)
    
...

The additional code above adds a bit of intelligence to the log processing. I have created Python sets of the common mnemonics for a gamma ray curve and a caliper tool. Python sets are powerful collections that make it easy to find intersections and differences between groups of data. You can dive deeper into Python sets through this link. The new “curve_check” function takes all the curves found within the LAS file and returns every curve that is labeled as a gamma ray. Wherever the LAS data (the “logSet” variable) and the defined “GRCurves” set both contain a mnemonic, that curve will be added to our display in track one.
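
As a quick illustration of what curve_check returns, here is the set logic on its own with a made-up curve list and trimmed-down mnemonic sets:

# Hypothetical curve list pulled from an LAS file's ~Curve section
log_curves = ["DEPT", "SGR", "CALI", "RHOB", "NPHI"]

gr_curves = {"GR", "SGR", "HSGR", "CGR"}         # trimmed-down version of the GR mnemonic set
cal_curves = {"CAL", "CALI", "CALIPER", "HCAL"}  # trimmed-down version of the caliper set

log_set = set(log_curves)
print(log_set.intersection(gr_curves))   # {'SGR'}  -> destined for track one
print(log_set.intersection(cal_curves))  # {'CALI'} -> destined for track two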

That wraps up an introduction to processing LAS files in Python. I am leaving plenty of room for you to expand and grow. If there is interest, let us know and we can dig into calculated columns and more advanced plotting. The final code for the gif above is below if you'd rather copy-pasta your way to the quick win.


# -*- coding: utf-8 -*-
"""
Created on Fri Jan  1 21:01:23 2021

@author: @FracLost

Well log viewer

1 - Open a blank page
2 - User imports an .LAS file
3 - LAS file is loaded and visualized
4 - User can customize/process data

References
https://pypi.org/project/lasio/
https://dash.plotly.com/dash-core-components/upload
https://dash.plotly.com/external-resources
https://dash-bootstrap-components.opensource.faculty.ai/docs/quickstart/

"""

import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
from plotly import subplots
import plotly.graph_objects as go

import base64
import io
import datetime
import re
import pandas as pd
import numpy as np
import lasio

# Track 1 - GR
# Track 2 - Caliper (slim track)


GRColors = ["#00A347","#00EE67","#008038","#39FF8F","#005D28","#83FFB9","#003A19"]
CalColors = ["#000000","#242424", "#494949", "#6D6D6D", "#929292", "#B6B6B6"]

# Initializing our Dash application

# The meta_tags section is used with the dash_bootstrap_components to help control resizing on smaller devices
# Follow this link for more information on viewports https://developer.mozilla.org/en-US/docs/Web/HTML/Viewport_meta_tag
app = dash.Dash(__name__,
                title='Well Log Viewer',
                meta_tags=[
                    {"name": "viewport", "content": "width=device-width, initial-scale=1"},]
                )

def curve_check(logCurves, curveGroup):
    # Python sets of the common mnemonics for gamma ray and caliper curves
    GRCurves = {"GR", "CGGR", "CGR", "DLGR", "ECGR", "EHGR", "HRGR", "GAM", "GAMD", "GAMM", "GAMMA", 
                "GAMMARAY", "GAMNAT", "GCGR", "GCPS", "GRA", "GRAY", "GRC", "GRCD", "GRCF","GRD", 
                "GRDE", "GRDI", "GRG", "GRGC", "GRGM", "GRGM", "GRGS", "GRH", "GRHD", "GRHDIL", 
                "GRI", "GRIN", "GRLL", "GRM", "GRML", "GRN", "GRNC", "GRNP", "GRP", "GRPD", "GRPND", 
                "GRQH", "GRS", "GRSD", "GRSG", "GRSL", "GRT", "GRTO", "GRX", "GRZ", "GSGR", "HGR", "HSGR",
                "LGR", "P01LGR", "P02LGR", "P03LGR", "P06LGR", "PGR", "SGR", "SGRA", "SGRB", "SGRC", 
                "RGR", "HCGR", "GAMMA_RAY", "GAMS", "GRCR", "GRHR", "GRQA", "GRR", "GRV", "MCGR", "GRCN", 
                "GRDP", "GRCC", "GRCG", "GRCL", "GRCM", "GRCP", "GRD4", "GRFB", "GRMN", "GRNB", "GRPO", 
                "GRPR", "GRRF", "TGR", "TOTGR", "GRTOR", "GR1", "GR2", "GRCO", "GRDA", "GRZD", "HHGR", 
                "MGR", "SGRD"
                }
    
    calCurves = {"CALIPER", "CALIP", "CALI", "CAL", "DCAL", "ACAL", "CALA", "CALD", "CALE", "CALH", 
                 "CALL", "CALM", "CALML", "CALN", "CALP", "CALS", "CALT", "CALX", "CALXH", "CALXZ", 
                 "CALY", "CALYH", "CALYHD", "CALYM", "CALYQH", "CALYZ", "CALZ", "CANC", "CANN", 
                 "CAPD", "CAX", "CAY", "CLDC", "CLDM", "CLL0", "CLMR", "CLMS", "CLRM", "CLS2", 
                 "CLTC", "CLXC", "CLXF", "CLYC", "MCAL", "CALXQH", "CLCM", "CR1", "CR2", "CS1M", 
                 "CS2M", "CS3M", "CS4M", "CS5M", "CS6M0", "HCA1", "HCAL", "HCALI", "HCALX", "HCALY", 
                 "XCAL", "YCAL", "CABX", "CABY", "CACN", "CADF", "CADP", "CAMR", "CAXR", "CAYR", 
                 "DCCP", "MLTC", "C1", "C13", "C13A", "C13H", "C24A", "C24", "C24H", "C24I", 
                 "C24L", "C24M", "C24P", "C24Z", "CA", "CA1", "CA2", "CADE", "CAL1", "CAL2", "CAL3", 
                 "CALXM", "CALXQ8", "CALXGH", "CALYQ8", "CLCD", "CLLO", "CQLI", "HD1", "HD2", "HD3", 
                 "HDAR", "HDIA", "HDMI", "HDMN", "HDMX", "HLCA", "LCAL", "SA", "TAC2", "C3", "HHCA", 
                 "MBTC", "TACC", "DZAL"
                 }
    
    #Convert the log curve (python list) to a python set so we can utilize the built in comparison functions
    logSet = set(logCurves)
    
    # This should return a set containing only the curves that are known GR curve mnemonics
    # Reference for python set methods - https://medium.com/better-programming/a-visual-guide-to-set-comparisons-in-python-6ab7edb9ec41
    if curveGroup == "GR":
        return logSet.intersection(GRCurves)
    elif curveGroup == "CAL":
        return logSet.intersection(calCurves)



def generate_curves(data):
    # MNEMONIC REFERENCE https://geoloil.com/LasCurveMnemonicsDictionary.php
    
    lineWidth = 1
    fontSize = 8
    tickFontSize=8
    height=950
    width=800
    bg_color="white"
    
    
    columns = list(data.curves.keys())
    
    yvalues = "DEPT"  # Must be "DEPT" or "DEPTH" , based on las 2.0 mnemonics
    
    # Container for the various plots from the LAS Data
    myPlots = []
    
    #myPlots.append(["GR"])
    
    GRList = curve_check(columns,"GR")
    myPlots.append(list(GRList))
    
    CalList = curve_check(columns,"CAL")
    myPlots.append(list(CalList))
  
    #Using column_widths to make the caliper track smaller than everything
    fig = subplots.make_subplots( 
        rows=1, cols=len(myPlots), shared_yaxes=True, horizontal_spacing=0, column_widths=[0.5, 0.1]
    )
    
    

    for i in range(len(myPlots)):
        lineColorCount = 0
        for column in myPlots[i]:
            if i == 0:
                # Wrap around the palette so a track with many curves doesn't run out of colors
                lineColor = GRColors[lineColorCount % len(GRColors)]
            elif i == 1:
                lineColor = CalColors[lineColorCount % len(CalColors)]
            else:
                lineColor = "black"
            
            
            fig.append_trace(
                go.Scatter(
                    x=data.curves[column].data,
                    y=data.curves[yvalues].data,
                    name=column,
                    line={
                        "width": lineWidth,
                        "dash": "dashdot" if column in myPlots[0] else "solid",
                        "color": lineColor,
                    },
                ),
                row=1,
                col=i + 1,
            )
            lineColorCount += 1
    
    
    fig["layout"]["xaxis{}".format(1)].update(
        title="Gamma Ray",
        range=[0, 150]
    )
    
    fig["layout"]["xaxis{}".format(2)].update(
        title="Hole Condition",
        range=[4, 12],
    )
    
    fig["layout"]["yaxis"].update(
        title="Depth", autorange="reversed",
        )
    

    for axis in fig["layout"]:
        if re.search(r"[xy]axis[0-9]*", axis):
            fig["layout"][axis].update(
                mirror="all",
                automargin=True,
                showline=True,
                title=dict(font=dict(family="Arial, sans-serif", size=fontSize)),
                tickfont=dict(family="Arial, sans-serif", size=tickFontSize),
            )

    
    fig["layout"].update(
        height=height,
        #width=width,
        plot_bgcolor=bg_color,
        paper_bgcolor=bg_color,
        hovermode="y",
        legend={"font": {"size": tickFontSize}},
        margin=go.layout.Margin(r=100, t=100, b=50, l=80, autoexpand=False),
    )
    
    # Add x axis lines, separating tracks
    fig.update_yaxes(showline=True, linewidth=1, linecolor='#929292', gridcolor="#929292")
    fig.update_xaxes(gridcolor="#929292", linewidth=0.5, linecolor='#929292')

    return dcc.Graph(figure=fig)
    

app.layout = dbc.Container(
    [html.H1('Well Log Viewer', className='twelve columns'),
     html.Hr(),
     dcc.Upload(
        id='upload-data',
        children=html.Div([
            'Drag and Drop or ',
            html.A('Select an .LAS file')
        ]),
        style={
            'width': '100%',
            'height': '60px',
            'lineHeight': '60px',
            'borderWidth': '2px',
            'borderStyle': 'dashed',
            'borderRadius': '5px',
            'textAlign': 'center',
            'margin': '10px'
        },
        # Not currently allowing multiple files to be uploaded
        multiple=False
     ),
     html.Div(
            className="section",
            children=[
                html.Div(className="section-title", children="LAS curves"),
                html.Div(
                    className="page",
                    children=[html.Div(id="las-curves", children="")],
                ),
            ],
        ),
     ],
    fluid=True,
)

# Sourced from https://dash.plotly.com/dash-core-components/upload
def parse_contents(contents, filename):
    content_type, content_string = contents.split(',')
    decoded = base64.b64decode(content_string)
    try:
        if 'csv' in filename:
            # Assume that the user uploaded a CSV file
            df = pd.read_csv(
                io.StringIO(decoded.decode('utf-8')))
        elif 'xls' in filename:
            # Assume that the user uploaded an excel file
            df = pd.read_excel(io.BytesIO(decoded))
        elif '.las' in filename:
            # Assume that the user uploaded an LAS file
            df = lasio.read(decoded.decode('utf-8'))
            # send data to function to generate a graph
            
            return generate_curves(df)

            
            
    except Exception as e:
        print(e)
        return html.Div([
            'There was an error processing this file.'
        ])


@app.callback(Output('las-curves', 'children'),
              Input('upload-data', 'contents'),
              State('upload-data', 'filename'),)
def update_output(list_of_contents, list_of_names):
    if list_of_contents is not None:
        children = [
            parse_contents(list_of_contents, list_of_names)]
        return children


if __name__ == '__main__':
    app.run_server(debug=False, port=1234)
