Initial commit of public repository open_pdks.
diff --git a/common/README b/common/README
new file mode 100644
index 0000000..decfec1
--- /dev/null
+++ b/common/README
@@ -0,0 +1,584 @@
+open_pdks : A system for installing silicon foundry PDKs for open-source EDA tools
+(also maybe works for installing commercial tools)
+
+----------------------------------------------------------------------------------
+
+Written by Tim Edwards 2019 / 2020 for efabless (efabless.com)
+and Open Circuit Design (opencircuitdesign.com)
+
+----------------------------------------------------------------------------------
+
+Introduction:
+
+    Silicon foundry PDKs are notoriously non-standard, and files obtained
+    from the foundry may end up in any possible configuration of files and
+    folders.  In addition, silicon foundries are notorious among open source
+    EDA tool enthusiasts for supplying user setups for commercial EDA tools
+    and all but ignoring open source EDA tools.  Open_pdks aims to mitigate
+    the problem by defining a standard layout of files and directories for
+    known open standard formats (e.g., SPICE, verilog, liberty, LEF, etc.)
+    and for various open source EDA tools (e.g., magic, netgen, OpenROAD,
+    klayout) using a Makefile system and a number of conversion scripts to
+    ensure that for any process, all files needed by all EDA tools can be
+    found in predictable locations.
+
+    The scripts aim to be as general-purpose as possible to allow easy
+    adaptation to new tools, formats, and foundries.  Where foundry data
+    is intractably unusable, custom install files can be added to overwrite
+    or annotate vendor data as needed.
+
+    Each foundry process is a subdirectory of the open_pdks top level and
+    has its own Makefile.  The typical install process is to cd to the
+    foundry top level and run "make" (see below for details).
+    
+    The general file structure created by open_pdks is as follows:
+
+	<foundry_root>/
+	    <name_of_pdk_variant_1>/
+	    <name_of_pdk_variant_2>/
+	    ...
+	    <name_of_pdk_variant_x>/
+		libs.tech/
+		    <name_of_EDA_tool_1>/
+		    <name_of_EDA_tool_2>/
+		    ...
+		    <name_of_EDA_tool_x>/
+			<EDA_tool_setup_files>
+		libs.ref/
+		    <name_of_IP_library_1>/
+		    <name_of_IP_library_2>/
+		    ...
+		    <name_of_IP_library_x>/
+			<name_of_file_format_1>
+			<name_of_file_format_2>
+			...
+			<name_of_file_format_x>
+			    <vendor_files>
+			
+    Note that this format is very general and does not constrain the
+    EDA tools supported or file formats supported, so long as there
+    are scripts in the system to provide that support.  It is intended
+    that open_pdks can be extended as needed to support new tools or
+    new file formats.
+
+    Current EDA tools supported in this version of open_pdks:
+	Tool	    Directory name
+	--------------------------
+	ngspice	    ngspice
+	magic	    magic
+	netgen	    netgen
+	klayout	    klayout
+	qflow	    qflow
+	openlane    openlane
+		
+    Current IP library file formats supported in this version of open_pdks*:
+	Format	    Directory name
+	--------------------------
+	CDL	    cdl
+	SPICE	    spice
+	magic	    mag, maglef
+	LEF	    lef
+	GDS	    gds
+	verilog	    verilog
+	liberty	    lib
+	PDF**	    doc
+
+	(* "Supported" meaning expected/handled by conversion scripts;
+	   as noted, the install is very general purpose and any name
+	   can be used as a target for any vendor or custom files.)
+	(** or HTML or any valid document format, plus supporting files.)
+
+How to use open_pdks:
+
+    There are a seriously limited number of open foundry PDKs.  Those that
+    are known (SkyWater, MOSIS SCMOS) are included in the repository.  In
+    other cases (X-Fab XH035, XH018) it is possible to get an extension to
+    open_pdks from a known trusted source through NDA verification with
+    the foundry.  In all other cases, foundries should be berated until
+    they agree to support the open_pdks format.
+
+    To the extent possible, open_pdks does not keep any foundry data
+    itself.  Instead, it adapts to the file structure available from
+    whatever system each foundry uses for downloads.  Each foundry
+    directory should contain a README file that details how to obtain
+    downloads from the foundry, and what files need to be downloaded.
+    Since the download methods vary wildly, it is up to the user to obtain
+    the foundry data as instructed.  The Makefile in the open_pdks foundry
+    directory then needs to be edited to set the correct path to the
+    foundry source data.
+
+    The installation is a bootstrapping process, so needs to be done in
+    stages.  The first stage installs setup files for all the EDA tools.
+    The second stage installs IP libraries (e.g., standard cells, padframe
+    I/O, analog circuits) and depends heavily on the use of the open EDA
+    tools themselves to fill in any missing file formats.  Therefore the
+    tool setup files need to be installed first, and then the IP libraries.
+    If using a distributed install (see below), then the tool setup files
+    need to be installed and distributed (relocated to the final run-time
+    location) before the IP libraries are installed.
+
+    There are two distinct install types supported by open_pdks:
+
+    (1) Local install:  Use a local install when the EDA tools will be run
+    on a single host, and all the PDK data are on the same host.
+
+    The local install sequence is:
+
+	make
+	make install-local		Install EDA tool setup
+	make install-vendor-local	Install IP libraries
+
+    (2) Distributed install:  Use the distributed install when the PDK
+    will be run from multiple hosts, but will be installed into a
+    different location such as a git repo which is then distributed to
+    all hosts, and may not itself reside in the same root directory tree.
+
+    The distributed install sequence is:
+
+	make
+	make install-dist		Install EDA tool setup
+	make install-vendor-dist	Install IP libraries
+
+    Note that local installs may opt to make symbolic links back to the
+    foundry sources, where possible (see options for foundry_install.py,
+    below).  Distributed installs and local installs may also make
+    symbolic links from any PDK variant back to a "master" PDK variant,
+    where possible (that is, where the files are the same).  For example,
+    a standard cell library will probably be compatible with all metal
+    back-end stacks, and so only one copy of all the library files is
+    needed in one of the PDK variants.  For the other PDK variants, the
+    same files are all symbolic links to the files in the first PDK
+    variant.  But an I/O library would have different layouts for different
+    metal back-end stacks, so layout-dependent files like GDS would be
+    different for each PDK, but layout-independent files like verilog
+    might be symbolic links to files in the first PDK.
+
+Prerequisites:
+
+    The following tools/software stacks are needed to run open_pdks:
+
+	python3
+
+	magic	opencircuitdesign.com/magic or github.com/RTimothyEdwards
+
+		assumed to be installed and discoverable in the standard
+		search path as defined by the shell (version 8.2+ required)
+
+How to make or update an open PDK:
+
+    The backbone of the open_pdks system is a set of scripts found in the
+    common/ subdirectory.  The two main scripts are "preproc.py" and
+    "foundry_install.py", with a host of supporting scripts.
+
+    Creating a new PDK starts with generating a Makefile, which can be
+    done by copying a Makefile from an existing project.  The first thing
+    to do is to define the number of PDK variants (usually based on back-end
+    metal stacks available, but can also include front-end options, especially
+    if they are mutually exclusive rather than simply additional masks).
+    Then create the make and make-install targets for local and distributed
+    install, including install (plain), install-vendor, and install-custom.
+    Define the default source and target paths.
+
+    (Needed:  A "make makefile" script that generates the "local" and "dist"
+    automatically, and potentially can also make all the different PDK
+    targets automatically, from a much shorter and simpler master Makefile.)
+
+    Create the basic scripts for tools.  Since foundries do not support open
+    EDA tools, it is inevitable that these files need to be created by hand
+    unless there is an option to import other formats.  Because Magic is used
+    heavily by open_pdks to create missing file formats from other existing
+    file formats, a Magic techfile is critical.  Each of the basic scripts
+    will contain #ifdef ... #endif and similar conditionals to allow the
+    script to be parsed for each target PDK variant.  Each of these scripts
+    is passed through common/preproc.py to handle the conditionals.  Of course,
+    it is possible to make a separate file for each PDK variant as long as the
+    Makefile handles them properly, but use of the preproc.py script allows
+    all the PDK variants to be handled in the same way, simplifying the Makefile.
+
+    --------------------------------------------------------------------------
+    preproc.py Usage:
+
+        preproc.py input_file [output_file] [-D<variable> ...]
+ 
+	  Where <variable> may be a keyword or a key=value pair
+ 
+	  Syntax:  Basically like cpp.  However, this preprocessor handles
+	  only a limited set of keywords, so it does not otherwise mangle
+	  the file in the belief that it must be C code.  Handling of boolean
+	  relations is important, so these are thoroughly defined (see below)
+ 
+	        #if defined(<variable>) [...]
+	        #ifdef <variable>
+	        #ifndef <variable>
+	        #elseif <variable>
+	        #else
+	        #endif
+ 
+	        #define <variable> [...]
+	        #undef <variable>
+ 
+	        #include <filename>
+ 
+	  <variable> may be
+	        <keyword>
+	        <keyword>=<value>
+ 
+	        <keyword> without '=' is effectively the same as <keyword>=1
+	        Lack of a keyword is equivalent to <keyword>=0, in a conditional.
+ 
+	  Boolean operators (in order of precedence):
+	        !       NOT
+	        &&      AND
+	        ||      OR
+ 
+	  Comments:
+	        Most comments (C-like or Tcl-like) are output as-is.  A
+	        line beginning with "###" is treated as a preprocessor
+	        comment and is not copied to the output.
+ 
+	  Examples:
+	        #if defined(X) || defined(Y)
+	        #else
+	        #if defined(Z)
+	        #endif
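
	  The conditional filtering described above can be sketched in a few
	  lines of Python.  This is a simplified illustration, not the actual
	  preproc.py implementation (the function name is hypothetical); it
	  handles only #ifdef/#ifndef/#else/#endif and "###" comments, while
	  the real script also supports #if defined(...) with !, && and ||,
	  plus #define, #undef, and #include.

```python
# Toy sketch of cpp-style conditional filtering as described above.
def filter_conditionals(lines, defines):
    out = []
    stack = []                          # one bool per nested conditional
    for line in lines:
        tok = line.split()
        key = tok[0] if tok else ''
        if key == '#ifdef':
            stack.append(tok[1] in defines)
        elif key == '#ifndef':
            stack.append(tok[1] not in defines)
        elif key == '#else':
            stack[-1] = not stack[-1]
        elif key == '#endif':
            stack.pop()
        elif line.startswith('###'):
            continue                    # preprocessor comment, never copied
        elif all(stack):
            out.append(line)
    return out
```

	  For example, with METAL5 defined, lines between "#ifdef METAL5" and
	  "#else" are kept and the "#else" branch is dropped.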
+
+    --------------------------------------------------------------------------
+
+    The script common/foundry_install.py handles all the IP library processing
+    and installation.  It generates the local directory structure and populates
+    the directories with foundry vendor data, and filters or otherwise uses
+    open EDA tools to generate missing standard file formats or create file
+    formats needed by the open EDA tools.
+
+    foundry_install.py Usage:
+
+	foundry_install.py [option [option_arguments]] ...
+
+	All options begin with "-" and may be followed by one or more
+	arguments (that do not begin with "-").  The foundry_install.py
+	script may be called multiple times, although it is best to
+	group together all files for the installation of an IP library,
+	since the options given will be used to determine what files are
+	missing and need to be generated.
+
+	Global options:
+	    -link_from <type>  
+			    Make symbolic links to vendor files from target
+			    Types are: "none", "source", or a PDK name.
+			    Default "none" (copy all files from source)
+	    -source <path>
+			    Path to source data top level directory
+	    -target <path>
+			    Path to target top level directory
+	    -local <path>
+			    For distributed installs, this is the local
+	                    path to target top level directory.
+
+	    -library <type> <name>
+			    The install target is an IP library with
+			    name <name>.
+	    -ef_format
+			    Use the original efabless format for file
+			    installs.  This has several differences from
+			    the non-efabless install.  The most important
+			    is that the order of directories for IP libraries
+			    is <file_format>/<library_name> instead of
+			    <library_name>/<file_format>.  As the efabless
+			    platform migrates to the developing open_pdks
+			    standard, this option should eventually be
+			    deprecated.  In open_pdks, the option is set
+			    from the EF_FORMAT variable setting in the Makefile.
+
+	All other options represent installation into specific directories.
+	The primary rule is that if foundry_install.py is passed an option
+	"-library" (see syntax below), then all other non-global options
+	represent subdirectories of the IP library, given the same name as
+	the option word following the "-".  If the foundry_install.py command
+	line does not have an option "-library", then all non-global options
+	represent per-EDA tool subdirectories, where the name of the subdirectory
+	is the same as the option word following the "-".
+
+	Each tool install option has the syntax:
+
+		-<tool_name> <path> [<option_arguments>]
+
+	Each IP library install option has the syntax:
+
+		-<file_format_name> <path> [<option_arguments>]
+	
+	 The <path> is a directory path that is relative to the path prefix
+	 given by the -source option.  The path may be wildcarded with the
+	 character "*".  The specific text "/*/" is always replaced by the
+	 name of the IP library (if "-library" is an option).  Otherwise,
+	 "*" has the usual meaning of matching any characters in a name
+	 (see python glob.glob() command for reference).
+
+	 (Note that the INSTALL variable in the Makefile starts with "set -f"
+	 to keep the shell from doing wildcard substitution;  otherwise the
+	 wildcards in the install options would be expanded by the shell
+	 before being passed to the install script.)
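
	 The wildcard behavior described above can be sketched as follows
	 (a simplified illustration; the function name and directory layout
	 are hypothetical, and the real foundry_install.py does much more):

```python
import glob
import os

def resolve_source_paths(source_root, pattern, library=None):
    # "/*/" is always replaced by the IP library name when a -library
    # option is in effect; any remaining "*" is expanded with glob,
    # relative to the -source path prefix.
    if library is not None:
        pattern = pattern.replace('/*/', '/' + library + '/')
    return sorted(glob.glob(os.path.join(source_root, pattern)))
```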
+
+	 In some cases, it may be required to process an option like "compile"
+	 (see below) on files already in the target path without adding any
+	 source files.  In that case, <path> may be any keyword that does not
+	 point to a valid directory;  "none" is a recommended choice.
+
+	 Library option:
+	
+	       -library <type> <name> [<target>]
+	
+	    <type> may be one of the following:
+
+		digital		Digital standard cells
+		primitive	Primitive devices
+		general		All others
+
+		Analog and I/O libraries fall under the category "general".
+
+	    <name> is the vendor name of the library.
+
+	    [<target>] is the (optional) local name of the library.  If omitted,
+	    then the vendor name is used for the target (there is no particular
+	    reason to specify a different local name for a library).
+
+	 Any number of libraries may be supported, and one "-library" option
+	 may be provided for each supported library.  The use of multiple
+	 libraries for a single run of foundry_install.py only works if the
+	 formats (gds, cdl, lef, etc.) happen to all work with the same wildcards.
+	 But it is generally most common to declare only one library name per
+	 call to foundry_install.py.
+	
+	 Common foundry_install.py options when used with "-library":
+	
+	       -techlef <path> [option_arguments]   Technology LEF file
+	       -doc <path> [option_arguments]	    library documentation
+	       -lef <path> [option_arguments]	    LEF file
+	       -spice <path> [option_arguments]	    SPICE netlists
+	       -cdl <path> [option_arguments]	    CDL netlists
+	       -lib <path> [option_arguments]	    Liberty timing files
+	       -gds <path> [option_arguments]	    GDS layout data
+	       -verilog <path> [option_arguments]   Verilog models
+
+	 Any name can be used after the "-" and the installation of files
+	 will be made into a directory of that name, which will be created
+	 if it does not exist.  The names used above are preferred, for
+	 the sake of compatibility between EDA tools.
+
+	 Of special note is "techlef", as technology LEF files are often
+	 associated with a PDK and not an IP library.  In this system,
+	 the technology LEF file should be associated with each standard
+	 cell library for which it is intended.
+
+	 [option_arguments] may be one of the following:
+
+	       up  <number>
+		    Any tool option can use this argument to indicate that
+		    the source hierarchy should be copied entirely, starting
+		    from <number> levels above the files indicated by <path>.
+		    For example, if liberty files are kept in multiple
+		    directories according to voltage level, then
+	
+			-liberty x/y/z/PVT_*/*.lib
+	
+		    would install all .lib files directly into
+		    libs.ref/<libname>/liberty/*.lib while
+	
+			-liberty x/y/z/PVT_*/*.lib up 1
+	
+		    would install all .lib files into
+		    libs.ref/<libname>/liberty/PVT_*/*.lib.
+	
+	       nospec
+	            Remove timing specification before installing (used with
+		    verilog files only;  could be extended to liberty files).
+
+	       compile
+	            Create a single library from all components.  Used when a
+		    foundry library has inconveniently split an IP library
+		    (LEF, CDL, verilog, etc.) into individual files.
+
+	       compile-only
+		    Same as argument "compile", except that the individual
+		    files are not copied to the target;  only the compiled
+		    library is created.
+
+	       stub
+	            Remove the contents of subcircuits from CDL or SPICE
+		    netlists, or from verilog files.  This allows LVS and
+		    other tools to know the order of pins in a circuit (for
+		    CDL or SPICE), or simply to ignore the contents of the
+		    file (any format) so that the circuit in question is
+		    treated as a "black box".
+
+	       priv
+	            Mark the contents being installed as privileged, and put
+		    them in a separate root directory libs.priv where they
+		    can be given additional read/write restrictions.
+
+	       filter <script_file_path>
+		    Process all files through the script <script_file_path>,
+		    which is given as a relative path to the directory
+		    containing the Makefile.  The filter script traditionally
+		    is put in local subdirectory custom/scripts/.  The filter
+		    script should be written to take a single argument, which
+		    is the path to a file, and process that file, and overwrite
+		    the file with the result.  Commonly used filters are found
+		    in the common/ directory.  See common/fixspice.py for an
+		    example.
+
+	       noclobber
+		    Mainly diagnostic.  When specified, any temporary files
+		    used during installation will be retained instead of
+		    deleted after use.  This includes, for example, scripts
+		    passed to magic for running extraction or file format
+		    generation.  It is useful when debugging problems with
+		    the install.
+
+	       anno
+		    Currently only supported for LEF files.  This argument
+		    indicates that the vendor LEF files should be used only
+		    for annotating GDS input with port location information,
+		    but the LEF files themselves should not be installed.
+
+	       noconvert
+		    Install files from source to target, but do not perform
+		    any additional conversions (such as CDL to SPICE, or
+		    GDS or LEF to magic).
+
+	       ignore=<keyword>[,...]
+		    Specifically for CDL and SPICE netlists, ignore any
+		    parameter found matching <keyword>
+
+	       rename=<new-name>
+		    For single files copied from source to target, the
+		    target file should be named <new-name> and not be
+		    given the same name as the source file.  When used
+		    with the "compile" or "compile-only" options, then
+		    the compiled file gets the name <new-name> rather
+		    than taking the name of the library.
+
+	       exclude=<file>[,...]
+		    When using "compile" or "compile-only", exclude any
+		    file in the target directory matching the name <file>.
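
	 A filter script of the kind taken by the "filter" argument above
	 might look like the following sketch.  The transformation shown
	 (lowercasing a hypothetical VDD5V net name) is made up purely for
	 illustration and is not one of the scripts shipped in common/:

```python
#!/usr/bin/env python3
# Hypothetical filter script following the contract described above:
# take a single argument (a file path), process the file, and
# overwrite it with the result.

import sys

def filter_file(path):
    with open(path, 'r') as f:
        text = f.read()
    # Made-up transformation, for illustration only.
    text = text.replace('VDD5V', 'vdd5v')
    with open(path, 'w') as f:
        f.write(text)

if __name__ == '__main__' and len(sys.argv) > 1:
    filter_file(sys.argv[1])
```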
+
+    File conversions handled by foundry_install.py:
+
+	The following file format conversions can be done automatically by
+	foundry_install.py:
+
+	    CDL to SPICE:  A CDL netlist or library can be converted to a
+			   general-purpose SPICE netlist that can be read
+			   by any tool that can read Berkeley SPICE 3f5
+			   syntax.
+
+	    GDS to LEF:	   An abstract view can be generated from a full
+			   layout view using Magic.
+
+	    GDS to SPICE:  In the absence of any netlist, Magic will
+			   extract a SPICE netlist from a full layout.
+
+	    SPICE (any) to SPICE (ngspice):  The fixspice.py script will
+			   attempt to convert any SPICE model file,
+			   cell library, or netlist to a form that is
+			   compatible with ngspice version 30.
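
	One small piece of the CDL-to-SPICE conversion, removing the " / "
	that CDL places before the model name on a subcircuit instance card,
	can be illustrated with this toy sketch (common/cdl2spi.py handles
	the many corner cases; this deliberately does not):

```python
def strip_model_slash(line):
    # Toy version: drop a standalone '/' token, or a '/' butting the
    # front of a token, as long as the token is not a parameter
    # assignment.  mapSubcktInst in cdl2spi.py is the careful version.
    out = []
    for tok in line.split():
        if tok == '/':
            continue
        if tok.startswith('/') and '=' not in tok:
            tok = tok[1:]
        out.append(tok)
    return ' '.join(out)
```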
+
+    open_pdks additional Makefile notes:
+
+	The "make install-local" ("make install-dist") step is generally
+	broken into individual make sections, one for each tool (e.g.,
+	magic, netgen, klayout).  There is an additional section called
+	"general" which installs a ".config" directory at the PDK top
+	level, containing a file "nodeinfo.json" which has general
+	information about the PDK that may be used by any tool that
+	understands the key:value pairs used in the JSON file.  Keys used
+	are as follows:
+
+		foundry :	Short name of the foundry, equal to the foundry
+				directory root, above the PDK variants.
+		foundry-name :  Long name of the foundry.
+		node :		The name of the PDK variant
+		feature-size :  The foundry process feature size (e.g., 130nm)
+		status :	"active" or "inactive".  May be used by tools
+				to present or hide specific PDK variants.
+		description :	Long text description of the process variant
+				(e.g., 6-metal stack + MiM caps)
+		options :	List of options, corresponding to the definitions
+				used in the Makefile and passed to preproc.py.
+		stdcells :	List of standard cell libraries available for this
+				PDK variant.
+		iocells :	List of I/O pad cell libraries available for this
+				PDK variant.
+
+	Note that the JSON file is, like other EDA tool setup files, usually a
+	master file that is parsed by preproc.py;  therefore when specifying
+	"options", use #undef before specifying each option name so that the
+	option name itself is ignored by the pre-processor.
+	
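	A nodeinfo.json file using the keys above might be generated as in
	the following sketch; every field value shown here is hypothetical:

```python
import json

# Hypothetical contents for .config/nodeinfo.json; all values made up.
nodeinfo = {
    'foundry': 'XYZ',
    'foundry-name': 'XYZ Semiconductor',
    'node': 'XYZ130A',
    'feature-size': '130nm',
    'status': 'active',
    'description': '6-metal stack + MiM caps',
    'options': ['METAL6', 'MIM'],
    'stdcells': ['xyz_stdcells_hd'],
    'iocells': ['xyz_io'],
}

print(json.dumps(nodeinfo, indent=4))
```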
+
+Goals of the open_pdks project:
+
+    The intended goal of open_pdks is to be able to support as many open source
+    EDA tools as practical, and to be able to generate all needed files for
+    those tools from any sufficiently complete set of vendor files.
+
+    A number of file conversions are not available but would be useful to have:
+
+	    SPICE to liberty:   Create timing files by running simulations
+				on SPICE netlists using ngspice.
+
+	    liberty to verilog: Use the function statements in liberty
+				format to create verilog primitives.  Maybe
+				use liberty timing information to generate
+				verilog specify sections.
+
+	    verilog to liberty: Reverse of the above.  Use verilog logic
+				tables and specify sections to generate liberty
+				functions and timing tables.
+
+    File formats that need to be supported:
+
+	    Schematic and symbol:  There are few standards, so either everyone
+				needs to agree on a good format to use, or there
+				needs to be a lot of scripts to do conversions
+				between formats.  Open EDA tools that could be
+				supported include:
+
+				electric, xcircuit, kicad, sue2
+
+    Other open source EDA tools that need to be supported:
+
+	    OpenROAD
+	    Coriolis2
+	    (add more here. . .)
+
+    Commercial EDA tools can potentially be supported under this same system,
+    provided sufficient compatibility with the file system structure.
+
+    Other scripts needed:
+
+	    Project setup script:  It would be useful to define a "standard
+	    project file structure" that is similar to the standard PDK file
+	    structure defined in open_pdks.  The preferred project setup
+	    based on the efabless model is:
+
+		<project_name>
+		    .config/
+			techdir (symbolic link to open_pdks PDK)
+		    project.json    (information file for tools)
+		    <tool_name>	    (magic, qflow, ngspice, etc.) or
+		    <format_name>   (spice, gds, verilog, etc.)
+
+	    In general, <tool_name> directories are intended to be workspaces
+	    for specific EDA tools (and may have their own nested hierarchies;
+	    e.g., qflow/<digital_block>/source,synthesis,layout) while
+	    <format_name> is a place to keep (final) files of a specific format,
+	    with the intention that any project can easily be made into an
+	    IP library and folded into the open_pdks scheme with little effort.
+
+	    The project.json file contains project information that can be used
+	    by a script to build a setup for any EDA tool.  One goal of the
+	    project.json file is to define "datasheet" (documented elsewhere)
+	    that can be used to drive characterization simulations and create
+	    a datasheet for the project.  Field "ip-name" of "datasheet" is
+	    the canonical name of the project, which can be distinguished from
+	    the project directory top-level name, such that the project can be
+	    moved or copied without affecting the tool flows.
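
	    A minimal project.json along these lines might look like the
	    following sketch; the field values are hypothetical, and only
	    "ip-name" inside "datasheet" is named in the text above:

```python
import json

# Hypothetical minimal project.json.  "ip-name" is the canonical
# project name, distinct from the project directory name; all other
# values are made up for illustration.
project = {
    'project-name': 'my_adc_project',
    'datasheet': {
        'ip-name': 'my_adc',
    },
}

print(json.dumps(project, indent=4))
```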
diff --git a/common/cdl2spi.py b/common/cdl2spi.py
new file mode 100755
index 0000000..02e3bb7
--- /dev/null
+++ b/common/cdl2spi.py
@@ -0,0 +1,969 @@
+#!/usr/bin/env python3
+"""
+cdl2spi.py : netlist processor
+Copyright (c) 2016, 2020 efabless Corporation.
+All rights reserved.
+
+usage: cdl2spi.py <inCDLfile> [<outSPCfile>] [options...]
+Writes to .spi to outSPCfile, or stdout if no output argument given. Sets exit
+status if there were non-zero errors.  Most errors/warnings are annotated in-line
+in the stdout each before the relevant line.
+"""
+
+import sys, getopt
+import os
+import re
+import textwrap
+
+# Convert linear scale to area scale suffix 
+# (e.g., if linear scale is 1e-6 ('u') then area scales as 1e-12 ('p'))
+
+def getAreaScale(dscale):
+    ascale = ''
+    if dscale == 'm':
+        ascale = 'u'
+    elif dscale == 'u':
+        ascale = 'p'
+    elif dscale == 'n':
+        ascale = 'a'
+    return ascale
+
+# Check nm (instanceName) in the context of sub (subckt): is it used yet?
+# If not used yet, mark it used, and return as-is.
+# Else generate a unique suffixed version, and mark it used, return it.
+# If 1M suffixes don't generate a unique name, throw exception.
+#   hasInm : global hash, key of hash is (subckt, iname)
+
+hasInm = {}
+def uniqInm(sub, nm):
+    subl=sub.lower()
+    nml=nm.lower()
+    if not (subl, nml) in hasInm:
+        hasInm[ (subl, nml) ] = 1
+        return nm
+    for i in range(1000000):
+        nm2 = nm + "_q" + str(i)
+        nm2l = nm2.lower()
+        if not (subl, nm2l) in hasInm:
+            hasInm[ (subl, nm2l) ] = 1
+            return nm2
+    # not caught anywhere, and gives (intended) non-zero exit status
+    raise AssertionError("uniqInm: range overflow for (%s,%s)" % (sub, nm))
+
+# Map illegal characters in an nm (instanceName) in context of sub (subckt).
+# For ngspice, '/' is illegal in instanceNames. Replace it with '|', BUT
+# then make sure the new name is still unique: does not collide with a name
+# used so far or another already derived unique name.
+
+inmBadChars='/'
+inmRplChars='|'
+inmBadCharREX=re.compile( "["+ inmBadChars+"]" )
+
+def mapInm(sub, nm):
+    nm2 = inmBadCharREX.sub(inmRplChars, nm)
+    return uniqInm(sub, nm2)
+
+# Process subckt line (array of tokens). Return new array of tokens.
+# There might be a ' /' in the line that needs to be deleted. It may be standalone ' / ', or
+# butting against the next token. It may be before all pins, after all pins, or between pins.
+# Do not touch / in a parameter assignment expression.
+# Do not touch / embedded in a pinName.
+# May touch / butting front of very first parameter assignment expression.
+# .subckt NM / p1 p2 p3 x=y g=h
+# .subckt NM /p1 p2 p3 x=y g=h
+# .subckt NM p1 p2 / p3 x=y g=h
+# .subckt NM p1 p2 /p3 x=y g=h
+# .subckt NM p1 p2 p3 / x=y g=h
+# .subckt NM p1 p2 p3 /x=y g=h
+# .subckt NM p1 p2 p3 x=y g=(a/b)     (don't touch this /)
+# .subckt NM p1 p2/3/4 p3 x=y g=(a/b) (don't touch these /)
+
+def mapSubcktDef(tok):
+    # find index of one-past first token (beyond ".subckt NM") containing an =, if any
+    param0 = len(tok)
+    for i in range(2, len(tok)):
+        if '=' in tok[i]:
+            param0 = i+1
+            break
+    # find first token before or including that 1st-param, starting with /:
+    #   strip the slash.
+    for i in range(2, param0):
+        if tok[i][0] == '/':
+            tok[i] = tok[i][1:]
+            if tok[i] == "":
+                del tok[i]
+            break
+    return tok
+
+def test_mapSubcktInst1():
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 p3".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc / p1 p2 p3".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc /p1 p2 p3".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 /p3".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 / p3".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 p3 x=4 /y=5".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 p3 x=4/2 y=5".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 p3 / x=4/2 y=5".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 p3 x=4/2 /y=5".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1 p2 p3 /x=4/2 y=5".split())))
+    print( " ".join(mapSubcktDef( ".subckt abc p1/2/3 p2 p3 /x=4/2 y=5".split())))
+
+# Process subckt instance line (array of tokens). Return new array of tokens.
+# (This function does not map possible illegal-chars in instanceName).
+# There might be a ' /' in the line that needs to be deleted. It may be standalone ' / ', or
+# butting against the next token. It can only be after pins, before or butting subcktName.
+#
+# Do not touch / in, butting, or after 1st parameter assignment expression.
+# Do not touch / embedded in a netName.
+# Do not touch / embedded in instanceName (they are handled separately elsewhere).
+# xa/b/c p1 p2 p3 / NM x=y g=h
+# xa/b/c p1 p2 p3 /NM x=y g=h
+# xabc p1 p2/3/4 p3 /NM x=(a/b) g=h
+# xabc p1 p2/3/4 p3 / NM x=(a/b) g=h
+# xabc p1 p2/3/4 p3 NM x=(a/b) / g=h    (don't touch; perhaps needs to be an error trapped somewhere)
+# xabc p1 p2/3/4 p3 NM / x=(a/b) g=h    (don't touch; perhaps needs to be an error trapped somewhere)
+# xa/b/c p1 p2/3/4 p3 NM x=(a/b) g=h    (don't touch these /)
+
+def mapSubcktInst(tok):
+    # find index of first token (beyond "x<iname>") containing an =, if any
+    param0 = tlen = len(tok)
+    for i in range(1, tlen):
+        if '=' in tok[i]:
+            param0 = i
+            break
+    # Determine modelName index. Either just prior to 1st-param (if any) else last token.
+    modndx = tlen - 1
+    if param0 < tlen:
+        modndx = param0 - 1
+    # If modndx now points to a standalone /, that can't be (would yield missing/empty modelName).
+    # Actual modelName must be before it. We need to check, possibly strip / on/before actual modelName.
+    # (Even though standalone / after model are most likely an independent error: we don't touch 'em).
+    while modndx > 1 and tok[modndx] == "/":
+        modndx-=1
+    # Check for standalone / before modelName. Else for modelName starting with /.
+    slashndx = modndx - 1
+    if slashndx > 0 and tok[slashndx] == "/":
+        del tok[slashndx]
+    else:
+        if modndx > 0 and tok[modndx].startswith("/"):
+            tok[modndx] = tok[modndx][1:]
+    return tok
+
+def test_mapSubcktInst2():
+    print( " ".join(mapSubcktInst( "xa/b/c p1 p2 p3 / NM x=y g=h".split())))
+    print( " ".join(mapSubcktInst( "xa/b/c p1 p2 p3 /NM x=y g=h".split())))
+    print( " ".join(mapSubcktInst( "xabc p1 p2/3/4 p3 /NM x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xabc p1 p2/3/4 p3 / NM x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xabc p1 p2/3/4 p3 NM x=(a/b) / g=h".split())))
+    print( " ".join(mapSubcktInst( "xabc p1 p2/3/4 p3 NM / x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xabc p1 p2/3/4 p3 /NM / x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xabc p1 p2/3/4 p3 / NM / x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xa/b/c p1 p2/3/4 p3 NM x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xa/b/c NM x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xa/b/c / NM x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xa/b/c /NM x=(a/b) g=h".split())))
+    print( " ".join(mapSubcktInst( "xa/b/c /NM".split())))
+
+# Primitives with M=<n> need to add additional par1=<n>.
+# Process token list, return new token list.
+# note: the line at this point may be like: m... p1 p2 p3 p4 NMOS M=1 $blah W=... L=...
+# meaning M=1 is not necessarily in a block of all parameter-assignments at EOL.
+# But by processing the line from the end backwards, we pick up the LAST M= if
+# there are multiple (a condition that really should be flagged as an error).
+# And M= is more likely towards the end of the line than the front (thus faster).
+# If "M=" has no value, do nothing (should also be a flagged error).
+
+def mapMfactor(tok, options={}):
+    # find index of M=* if any, starting from end.
+    # "addinm" is a list of additional parameters, each taking the same value as M
+    addinm = options['addinm'] if 'addinm' in options else []
+    mndx = 0
+    val = ""
+    for i in range(len(tok)-1, 0, -1):
+        if tok[i].lower().startswith("m="):
+            mndx = i
+            break
+    if mndx > 0:
+        val = tok[mndx][2:]
+    if val != "":
+        for p in addinm:
+            tok += [ p + "=" + val ]
+    return tok
+
+def test_mapMfactor():
+    print( " ".join(mapMfactor( "m1 p1 p2 p3 p4 NM M=joe".split())))
+    print( " ".join(mapMfactor( "m1 p1 p2 p3 p4 NM M= $SUB=agnd".split())))
+    print( " ".join(mapMfactor( "m1 p1 p2 p3 p4 NM M=2 $SUB=agnd WM=4".split())))
+    print( " ".join(mapMfactor( "m1 p1 p2 p3 p4 NM".split())))
+
+# From $nm=... strip the $. Preserve order on the line. No attempt to
+# detect any resultant collisions. "W=5 $W=10" becomes "W=5 W=10".
+# Don't touch $SUB=... or $[...] or $.model=... or $blah (no assignment).
+
+def mapCDLparam(tok):
+    for i in range(1, len(tok)):
+        if not tok[i].startswith("$"):
+            continue
+        eqi = tok[i].find("=")
+        if eqi > 1:
+            pnm = tok[i][1:eqi]
+            pnml = pnm.lower()
+            if pnml in ("sub",".model"):
+                continue
+            tok[i] = tok[i][1:]
+    return tok
+
+def test_CDLparam():
+    print( " ".join(mapCDLparam( "m1 p1 p2 p3 p4 NM M=joe".split())))
+    print( " ".join(mapCDLparam( "m1 p1 p2 p3 p4 NM M= $SUB=agnd $.model=NM3 $LDD".split())))
+    print( " ".join(mapCDLparam( "m1 p1 p2 p3 p4 NM M= $SUB=agnd $[NM3]".split())))
+    print( " ".join(mapCDLparam( "m1 p1 p2 p3 p4 NM M=joe $X=y".split())))
+    print( " ".join(mapCDLparam( "m1 p1 p2 p3 p4 NM M= $SUB=agnd $.model=NM3 $Z=4 $Z=5".split())))
+    print( " ".join(mapCDLparam( "m1 p1 p2 p3 p4 NM M= W=1 $W=2 W=3 $SUB=agnd $[NM3]".split())))
+
+# Extract $SUB=<tname> and $[mnm] (or $.model=<mnm>) from the tokens.
+# Return an array of three items: [ <tname>, <mnm>, tok ] where tok is the remainder.
+# An absent $SUB= or model directive yields "".
+# Since we delete tokens, process tokens in reverse order.
+
+def mapCDLtermModel(tok):
+    cdlTerm=""
+    cdlModel=""
+    for i in range(len(tok)-1, 0, -1):
+        if not tok[i].startswith("$"):
+            continue
+        tokl = tok[i].lower()
+        if tokl.startswith("$sub="):
+            if cdlTerm == "":
+                cdlTerm = tok[i][5:]
+            del tok[i]
+            continue
+        if tokl.startswith("$.model="):
+            if cdlModel == "":
+                cdlModel = tok[i][8:]
+            del tok[i]
+            continue
+        if tokl.startswith("$[") and tokl.endswith("]"):
+            if cdlModel == "":
+                cdlModel = tok[i][2:-1]
+            del tok[i]
+            continue
+    return [ cdlTerm, cdlModel, tok ]
+
+def test_CDLtermModel():
+    print( mapCDLtermModel( "m1 p1 p2 p3 p4 NM M=joe".split()))
+    print( mapCDLtermModel( "m1 p1 p2 p3 p4 NM $SUB=agnd".split()))
+    print( mapCDLtermModel( "m1 p1 p2 p3 p4 NM $SUB= $[PMOS] M=joe".split()))
+    print( mapCDLtermModel( "m1 p1 p2 p3 p4 NM $sUb=vssa $.MoDeL=PM4 M=joe".split()))
+
+# Determine if a single word looks like a plain numeric spice value,
+# i.e. a real number with an optional scale suffix and an optional unit suffix.
+# The only unit-suffix we support is m (meters) (because CDL-std describes it).
+# The only scale factors supported are: t,g,meg,k,mil,m,u,n,p,f.
+# This does not arithmetically compute anything.
+# Just returns True or False.
+# Examples: 220p 10nm -40g 2milm .34e+3 3.1e-4 .34e+3pm 3.1e-4meg
+# (Arguably we should strip a unit-suffix?)
+# def isPlainNumeric(word):
+
+# Segregate any remaining $* items from input tokens.
+# Return [ assignments, directives, remaining ] where each are lists.
+# Those that look like assignments $nm=... are separated from $blah.
+
+def mapDiscard(tok):
+    assign=[]
+    directive=[]
+    for i in range(len(tok)-1, 0, -1):
+        if not tok[i].startswith("$"):
+            continue
+        if "=" in tok[i]:
+            assign += [ tok[i] ]
+            del tok[i]
+            continue
+        directive += [ tok[i] ]
+        del tok[i]
+    return [ assign, directive, tok ]
+
+def test_mapDiscard():
+    print( mapDiscard( "m1 p1 p2 p3 p4 NM $X=4 $LDD M=joe $SUB=agnd ".split()))
+    print( mapDiscard( "m1 p1 p2 p3 p4 NM $X $LDD M=joe $SUB=agnd ".split()))
+    print( mapDiscard( "m1 p1 p2 p3 p4 NM M=joe SUB=agnd ".split()))
+
+# From a token-slice, partition into assignments and non-assignments.
+# Return [ assigns, nonAssigns] where each are lists.
+
+def mapPartAssign(tok):
+    assign=[]
+    nona=[]
+    for t in tok:
+        if "=" in t:
+            assign += [ t ]
+        else:
+            nona += [ t ]
+    return [ assign, nona ]
+
+def test_mapPartAssign():
+    print( mapPartAssign( "NM X=4 220nm -1.2e-5g LDD M=joe".split()))
+    print( mapPartAssign( "X=4 M=joe".split()))
+    print( mapPartAssign( "NM 220nm -1.2e-5g LDD".split()))
+    print( mapPartAssign( "".split()))
+
+# Find an assignment to nm in the token list (nm=val).
+# Return [val, tok]. If there are multiple nm=..., the value of the last one is used.
+# If edit is True, all nm=... assignments are removed from the returned tok.
+
+def mapLookup(tok, nm, edit):
+    val=""
+    nmeq = nm.lower() + "="
+    nmeqlen = len(nmeq)
+    for i in range(len(tok)-1, 0, -1):
+        if not tok[i].lower().startswith(nmeq):
+            continue
+        if val == "":
+            val = tok[i][nmeqlen:]
+        if edit:
+            del tok[i]
+    return [ val, tok ]
+
+def test_mapLookup():
+    print( mapLookup( "cnm t1 t2 area=220p PERimeter=100u M=joe par1=1".split(), "periMETER", True))
+    print( mapLookup( "m1 p1 p2 p3 p4 NM $X=4 $LDD M=joe $SUB=agnd ".split(), "x", True))
+    print( mapLookup( "m1 p1 p2 p3 p4 NM X=4 $LDD M=joe $SUB=agnd ".split(), "x", True))
+    print( mapLookup( "m1 p1 p2 p3 p4 NM x=4 $LDD M=joe $SUB=agnd ".split(), "x", True))
+    print( mapLookup( "m1 p1 p2 p3 p4 NM x=4 X=5 xy=6 $LDD M=joe $SUB=agnd ".split(), "x", True))
+    print( mapLookup( "m1 p1 p2 p3 p4 NM x=4 X=5 xy=6 $LDD M=joe $SUB=agnd ".split(), "x", False))
+
+# Format a diode. cdlTerm and cdlModel are passed in but ignored/unused.
+# Processes tok and returns a final token list to be output.
+# If after "dnm t1 t2 modelName", there are plain numerics (index 4,5), take them
+# as area and peri (overriding any area=/peri= parameters) and format them as
+# area=... peri=...
+# (Caller has already checked that the minimum FOUR fields are present.)
+
+def mapDiode(cdlTerm, cdlModel, tok, options={}):
+    ignore = options['ignore'] if 'ignore' in options else []
+    # strip remaining $* directives
+    [ ign, ign, tok ] = mapDiscard(tok)
+    # Find explicit area= peri=, remove from tok.
+    [area,  tok] = mapLookup(tok, "area",  True)
+    [peri, tok] = mapLookup(tok, "peri", True)
+    for p in ignore:
+        [ign, tok] = mapLookup(tok, p, True)
+    # For just token-slice after modelName, partition into assignments and non-assigns.
+    [assign, nona] = mapPartAssign(tok[4:])
+    tok = tok[0:4]
+    # TODO: If we have more than two non-assignments, should it be an error?
+    # Override area/peri with the 1st/2nd non-assignment values.
+    if len(nona) > 0:
+        area = nona.pop(0)
+    if len(nona) > 0:
+        peri = nona.pop(0)
+    if area != "":
+        tok += [ "area=" + area ]
+    if peri != "":
+        tok += [ "peri=" + peri ]
+    tok += nona
+    tok += assign
+    return tok
+
+def test_mapDiode():
+    print( mapDiode( "", "", "dnm t1 t2 DN 220p 100u M=joe par1=1".split()))
+    print( mapDiode( "", "", "dnm t1 t2 DN peri=100u area=220p M=joe par1=1".split()))
+    print( mapDiode( "", "", "dnm t1 t2 DN  M=joe par1=1".split()))
+
+# Format a mosfet. cdlTerm and cdlModel are passed in but ignored/unused.
+# Processes tok and returns a final token list to be output.
+# If after "mnm t1 t2 t3 t4 modelName", there are plain numerics (index 6,7), take
+# them as W and L (overriding any W=/L= parameters) and format them as W=... L=...
+# (Caller has already checked that the minimum SIX fields are present.)
+
+def mapMos(cdlTerm, cdlModel, tok, options={}):
+    ignore = options['ignore'] if 'ignore' in options else []
+    # strip remaining $* directives
+    [ ign, ign, tok ] = mapDiscard(tok)
+    # Find explicit W= L=, remove from tok.
+    [w, tok] = mapLookup(tok, "w",  True)
+    [l, tok] = mapLookup(tok, "l", True)
+    for p in ignore:
+        [ign, tok] = mapLookup(tok, p, True)
+    # For scaling, find AS, PS, AD, PD, SA, SB, and SD
+    [sarea, tok] = mapLookup(tok, "as",  True)
+    [darea, tok] = mapLookup(tok, "ad",  True)
+    [sperim, tok] = mapLookup(tok, "ps",  True)
+    [dperim, tok] = mapLookup(tok, "pd",  True)
+    [sa, tok] = mapLookup(tok, "sa",  True)
+    [sb, tok] = mapLookup(tok, "sb",  True)
+    [sd, tok] = mapLookup(tok, "sd",  True)
+
+    dscale = options['dscale'] if 'dscale' in options else ''
+    ascale = getAreaScale(dscale)
+
+    # For just token-slice after modelName, partition into assignments and non-assigns.
+    [assign, nona] = mapPartAssign(tok[6:])
+    tok = tok[0:6]
+    # TODO: If we have more than two non-assignments, should it be an error?
+    # Override W/L with the 1st/2nd non-assignment values.
+    if len(nona) > 0:
+        w = nona.pop(0)
+    if len(nona) > 0:
+        l = nona.pop(0)
+    if w != "":
+        tok += ["W=" + w + dscale]
+    if l != "":
+        tok += ["L=" + l + dscale]
+    if darea != "":
+        tok += ["AD=" + darea + ascale]
+    if sarea != "":
+        tok += ["AS=" + sarea + ascale]
+    if dperim != "":
+        tok += ["PD=" + dperim + dscale]
+    if sperim != "":
+        tok += ["PS=" + sperim + dscale]
+    if sa != "":
+        tok += ["SA=" + sa + dscale]
+    if sb != "":
+        tok += ["SB=" + sb + dscale]
+    if sd != "":
+        tok += ["SD=" + sd + dscale]
+    tok += nona
+    tok += assign
+    return tok
+
+def test_mapMos():
+    print( mapMos( "", "", "mnm t1 t2 t3 t4 NM 220p 100u M=joe par1=1".split()))
+    print( mapMos( "", "", "mnm t1 t2 t3 t4 NM L=100u W=220p M=joe par1=1".split()))
+    print( mapMos( "", "", "mnm t1 t2 t3 t4 PM M=joe par1=1".split()))
+
+# Format a cap.
+# Processes tok and returns a final token list to be output.
+# Optional cdlTerm adds a 3rd terminal.
+# If after "cnm t1 t2", there is a plain numeric or C=numeric, it is DISCARDED.
+# area/peri/perimeter assignments are respected, as are L/W. Both peri/perimeter
+# assign to peri= in the output; no perimeter= appears in the output.
+# (Caller has already checked that the minimum 3 fields are present and that cdlModel is non-null.)
+
+def mapCap(cdlTerm, cdlModel, tok, options={}):
+    ignore = options['ignore'] if 'ignore' in options else []
+    # strip remaining $* directives
+    [ ign, ign, tok ] = mapDiscard(tok)
+    # Find explicit area= peri= perimeter=, remove from tok.  peri overrides
+    # perimeter; both emit as peri= in the output.  Look up and discard any C=.
+    [area,  tok] = mapLookup(tok, "area",  True)
+    [perim,  tok] = mapLookup(tok, "perimeter", True)
+    [length,  tok] = mapLookup(tok, "l",  True)
+    [width,  tok] = mapLookup(tok, "w",  True)
+    [peri, tok] = mapLookup(tok, "peri", True)
+    if peri == "":
+        peri = perim
+    [ign, tok] = mapLookup(tok, "c", True)
+    for p in ignore:
+        [ign, tok] = mapLookup(tok, p, True)
+    # For just token-slice after modelName, partition into assignments and non-assigns.
+    # We ignore the nonassignments. Need remaining assignments for M= par1=.
+    [assign, nona] = mapPartAssign(tok[3:])
+    dscale = options['dscale'] if 'dscale' in options else ''
+    ascale = getAreaScale(dscale)
+    tok = tok[0:3]
+    if cdlTerm != "":
+        tok += [ cdlTerm ]
+    if cdlModel != "":
+        tok += [ cdlModel ]
+    if area != "":
+        tok += [ "area=" + area + ascale]
+    if peri != "":
+        tok += [ "peri=" + peri + dscale]
+    if length != "":
+        tok += [ "L=" + length + dscale]
+    if width != "":
+        tok += [ "W=" + width + dscale]
+    tok += assign
+    return tok
+
+def test_mapCap():
+    print( mapCap( "", "CPP", "cnm t1 t2 area=220p peri=100u M=joe par1=1".split()))
+    print( mapCap( "", "CPP", "cnm t1 t2 area=220p perimeter=100u M=joe par1=1".split()))
+    print( mapCap( "", "CPP", "cnm t1 t2 area=220p peri=199u perimeter=100u M=joe par1=1".split()))
+    print( mapCap( "", "CPP", "cnm t1 t2 M=joe par1=1".split()))
+    print( mapCap( "", "CPP", "cnm t1 t2 C=444 area=220p peri=199u perimeter=100u M=joe par1=1".split()))
+    print( mapCap( "", "CPP", "cnm t1 t2 444 M=joe par1=1".split()))
+    print( mapCap( "agnd", "CPP2", "cnm t1 t2 $LDD 220p M=joe par1=1".split()))
+
+# Format a res.
+# Processes tok and returns a final token list to be output.
+# Optional cdlTerm adds a 3rd terminal.
+# If after "rnm t1 t2", there is a plain numeric or R=numeric, it is DISCARDED
+# (except that the value "short" becomes a zero resistance).
+# W/L assignments are respected.
+# (Caller has already checked that the minimum 3 fields are present and that cdlModel is non-null.)
+
+def mapRes(cdlTerm, cdlModel, tok, options={}):
+    dscale = options['dscale'] if 'dscale' in options else ''
+    ignore = options['ignore'] if 'ignore' in options else []
+    # strip remaining $* directives
+    [ ign, ign, tok ] = mapDiscard(tok)
+    # Find explicit w/l, remove from tok.
+    # Lookup/discard a R=.
+    [w,  tok] = mapLookup(tok, "w",  True)
+    [l,  tok] = mapLookup(tok, "l", True)
+    [r, tok] = mapLookup(tok, "r", True)
+    for p in ignore:
+        [ign, tok] = mapLookup(tok, p, True)
+    # For just token-slice after modelName, partition into assignments and non-assigns.
+    # We ignore the nonassignments. Need remaining assignments for M= par1=.
+    [assign, nona] = mapPartAssign(tok[3:])
+    if len(nona) > 0:
+        r = nona.pop(0)
+    tok = tok[0:3]
+    if cdlTerm != "":
+        tok += [ cdlTerm ]
+    if cdlModel != "":
+        tok += [ cdlModel ]
+    if w != "":
+        tok += [ "W=" + w + dscale]
+    if l != "":
+        tok += [ "L=" + l + dscale]
+    # Convert name "short" to zero resistance
+    if r == "short":
+        tok += [ "0" ]
+    tok += assign
+    return tok
+
+def test_mapRes():
+    print( mapRes( "", "RPP1", "rnm t1 t2 w=2 L=1 M=joe par1=1".split()))
+    print( mapRes( "", "RPP1", "rnm t1 t2 444 w=2 L=1 M=joe par1=1".split()))
+    print( mapRes( "", "RPP1", "rnm t1 t2 R=444 w=2 L=1 M=joe par1=1".split()))
+    print( mapRes( "", "R2", "rnm t1 t2 L=2 W=10 M=joe par1=1".split()))
+    print( mapRes( "", "RM2", "rnm t1 t2 area=220p perim=199u perimeter=100u M=joe par1=1".split()))
+    print( mapRes( "", "RM2", "rnm t1 t2 M=joe par1=1".split()))
+    print( mapRes( "agnd", "RM3", "rnm t1 t2 $LDD 220p M=joe par1=1".split()))
+    print( mapRes( "agnd", "RM3", "rnm t1 t2 $LDD 220p L=4 W=12 M=joe par1=1".split()))
+
+# Format a bipolar. cdlTerm is optional. cdlModel is ignored.
+# Processes tok and returns a final token list to be output.
+# Optional cdlTerm adds an optional 4th terminal.
+# If after "qnm t1 t2 t3 model", there are plain numerics (not x=y), they are DISCARDED.
+# (Caller has already checked that the minimum FIVE fields are present and that cdlModel is empty.)
+
+def mapBipolar(cdlTerm, cdlModel, tok, options={}):
+    # strip remaining $* directives
+    ignore = options['ignore'] if 'ignore' in options else []
+    [ ign, ign, tok ] = mapDiscard(tok)
+    for p in ignore:
+        [ign, tok] = mapLookup(tok, p, True)
+    # For just token-slice after modelName, partition into assignments and non-assigns.
+    # We ignore the nonassignments. Need remaining assignments for M= par1=.
+    [assign, nona] = mapPartAssign(tok[5:])
+    # Start with "qnm t1 t2 t3". Insert optional 4th term. Then insert modelName.
+    model = tok[4]
+    tok = tok[0:4]
+    if cdlTerm != "":
+        tok += [ cdlTerm ]
+    tok += [ model ]
+    tok += assign
+    return tok
+
+def test_mapBipolar():
+    print( mapBipolar( "", "any", "qnm t1 t2 t3 QP1 M=joe par1=1".split()))
+    print( mapBipolar( "", "", "qnm t1 t2 t3 QP2 M=joe par1=1".split()))
+    print( mapBipolar( "", "", "qnm t1 t2 t3 QP2 $EA=12 M=joe par1=1".split()))
+    print( mapBipolar( "", "", "qnm t1 t2 t3 QP3 M=joe EA=14 par1=1".split()))
+    print( mapBipolar( "agnd", "", "qnm t1 t2 t3 QP4 $LDD 220p M=joe par1=1".split()))
+    print( mapBipolar( "agnd", "any", "qnm t1 t2 t3 QP4 $LDD 220p L=4 W=12 M=joe par1=1".split()))
+
+#------------------------------------------------------------------------
+# Main routine to do the conversion from CDL format to SPICE format
+#------------------------------------------------------------------------
+
+def cdl2spice(fnmIn, fnmOut, options):
+
+    err = 0
+    warn = 0
+
+    # Open and read input file
+
+    try:
+        with open(fnmIn, 'r') as inFile:
+            cdltext = inFile.read()
+            # Unwrap continuation lines
+            lines = cdltext.replace('\n+', ' ').splitlines()
+    except OSError:
+        print('cdl2spi.py: failed to open ' + fnmIn + ' for reading.', file=sys.stderr)
+        return 1
+
+    # Loop over original CDL:
+    #   record existing instanceNames (in subckt-context), for efficient membership
+    #   tests later.  Track the subckt-context, instanceNames only need to be unique
+    #   within current subckt.
+
+    sub = ""
+    for i in lines:
+        if i == "":
+            continue
+        tok = i.split()
+        tlen = len(tok)
+        if tlen == 0:
+            continue
+        t0 = tok[0].lower()
+        if t0 == '.subckt' and tlen > 1:
+            sub = tok[1].lower()
+            continue
+        if t0 == '.ends':
+            sub = ""
+            continue
+        c0 = tok[0][0].lower()
+        if c0 in '.*':
+            continue
+        # this will ignore primitive-devices (jfet) we don't support.
+        # TODO: flag them somewhere else as an ERROR.
+        if not c0 in primch2:
+            continue
+        # a primitive-device or subckt-instance we care about and support
+        # For subckt-instances record the instanceName MINUS lead x.
+        nm = tok[0]
+        if c0 == 'x':
+            nm = nm[1:]
+        hasInm[ (sub, nm) ] = 1
+
+
+    # loop over original CDL: do conversions.
+    # Track the subckt-context while we go; instanceNames only need to be unique
+    # within current subckt.
+
+    sub = ""
+    tmp = []
+    for i in lines:
+        tok = i.split()
+        tlen = len(tok)
+        # AS-IS: empty line or all (preserved) whitespace
+        if tlen == 0:
+            tmp += [ i ]
+            continue
+
+        # get 1st-token original, as lowercase, and 1st-char of 1st-token lowercase.
+        T0 = tok[0]
+        t0 = T0.lower()
+        c0 = t0[0]
+
+        # AS-IS: comment
+        if c0 == '*':
+            tmp += [i]
+            continue
+
+        # AS-IS: .ends; update subckt-context to outside-of-a-subckt
+        if t0 == '.ends':
+            sub = ""
+            tmp += [i]
+            continue
+
+        # change .param to a comment, output it
+        if t0 == '.param':
+            tmp += ["*"+i]
+            continue
+
+        # track .subckt context; process / in .subckt line, and output it.
+        if t0 == '.subckt':
+            if tlen < 2:
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Missing subckt name:"
+                tmp += [ msg, i ]
+                continue
+            T1 = tok[1]
+            sub = T1.lower()
+            tok = mapSubcktDef(tok)
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # subckt instance line. Process /, map instanceName (exclude x), and output it.
+        if c0 == 'x':
+            nm = T0[1:]
+            if nm == "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Missing subckt instance name:"
+                tmp += [ msg, i ]
+                continue
+            inm = mapInm(sub, nm)
+            tok[0] = T0[0] + inm
+            tok = mapSubcktInst(tok)
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # all primitives: need instanceName mapped, including 1st char in name.
+        # all primitives: need M=n copied to an added par1=n
+        # all primitives: Except for $SUB=... $[...] strip $ from $nm=... parameters.
+        # all primitives: Isolate $SUB and $[...] for further processing (in
+        # primitive-specific sections).
+
+        cdlTerm=""
+        cdlModel=""
+        if c0 in primch:
+            if T0[1:] == "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Missing primitive instance name:"
+                tmp += [ msg, i ]
+                continue
+            # primitives keep the leading type character in the mapped name
+            tok[0] = mapInm(sub, T0)
+            tok = mapMfactor(tok, options)
+            tok = mapCDLparam(tok)
+            [cdlTerm, cdlModel, tok] = mapCDLtermModel(tok)
+
+        # diode formats:
+        #   dname t1 t2 model <numericA> <numericP> m=...
+        # l:dname t1 t2 model {<numericA>} {<numericP>} {m=...} {$SUB=...}
+        # out format:
+        #   Xdname t1 t2 model area=<numericA> peri=<numericP> m=... par1=...
+        # We flag $SUB=... because so far (for XFAB) we CHOOSE not to support
+        # three-terminal diodes.
+        # CDL-std does not define $[...] as available for diodes, so we silently
+        # ignore it.
+        # Always 2 terminals and a modelName.
+        # We may already have peri=... and area=... plus ambiguity with plain
+        # numerics. TODO: generate a warning in case of ambiguity, but prefer
+        # plain numerics (with nm= added).
+
+        if c0 == "d":
+            tlen = len(tok)
+            if tlen < 4:
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Diode does not have minimum two terminals and model:"
+                tmp += [ msg, i ]
+                continue
+            if cdlTerm != "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Diode does not support $SUB=...:"
+                tmp += [ msg, i ]
+                continue
+            tok = mapDiode(cdlTerm, cdlModel, tok, options)
+            # add X to tok0.
+            if options['subckt']:
+                tok[0] = "X" + tok[0]
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # mosfet formats:
+        #   mname t1 t2 t3 t4 model W=... L=... m=...
+        # l:mname t1 t2 t3 t4 model {W=... L=...} {m=...} {$NONSWAP} {$LDD[type]}
+        # l:mname t1 t2 t3 t4 model <width> <length> {m=...} {$NONSWAP} {$LDD[type]}
+        # output format:
+        #   Xmname t1 t2 t3 t4 model W=... L=... m=... par1=...
+        # Fixed 4 terminals and a modelName.
+        # May already have W= L= and ambiguity with plain numerics.
+        # TODO: generate a warning in case of ambiguity, but prefer plain numerics
+        # (with nm= added).
+        if c0 == "m":
+            tlen = len(tok)
+            if tlen < 6:
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Mosfet does not have minimum four terminals and model:"
+                tmp += [ msg, i ]
+                continue
+            if cdlTerm != "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Mosfet does not support $SUB=...:"
+                tmp += [ msg, i ]
+                continue
+            tok = mapMos(cdlTerm, cdlModel, tok, options)
+            # add X to tok0.
+            if options['subckt']:
+                tok[0] = "X" + tok[0]
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # cap formats:
+        #  cname t1 t2   <numeric0> $[model] $SUB=t3 m=...
+        #  cname t1 t2   <numeric0> $[model] m=...
+        #? cname t1 t2 C=<numeric0> $[model] $SUB=t3 m=...
+        #? cname t1 t2   <numeric0> $[model] $SUB=t3 area=<numericA> perimeter=<numericP> m=...
+        #? cname t1 t2   <numeric0> $[model] $SUB=t3 area=<numericA> peri=<numericP> m=...
+        #l:cname t1 t2  {<numeric0>} {$[model]} {$SUB=t3} {m=...}
+        # out formats:
+        #  Xcname t1 t2    model area=<numericA> peri=<numericP> m=... par1=...
+        #  Xcname t1 t2 t3 model area=<numericA> peri=<numericP> m=... par1=...
+        # We require inm, two terminals. Require $[model]. Optional 3rd-term $SUB=...
+        # If both peri and perimeter, peri overrides.
+        # Both area/peri are optional. The optional [C=]numeric0 is discarded always.
+
+        if c0 == "c":
+            tlen = len(tok)
+            if tlen < 3:
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Cap does not have minimum two terminals:"
+                tmp += [ msg, i ]
+                continue
+            if cdlModel == "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Cap missing required $[<model>] directive:"
+                tmp += [ msg, i ]
+                continue
+            tok = mapCap(cdlTerm, cdlModel, tok, options)
+            # add X to tok0.
+            if options['subckt']:
+                tok[0] = "X" + tok[0]
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # res formats:
+        #   rname n1 n2   <numeric> $SUB=t3 $[model] $w=... $l=... m=...
+        # c:rname n1 n2 R=<numeric> $[model] w=... l=... m=... $SUB=t3 
+        # l:rname n1 n2   {<numeric>} {$SUB=t3} {$[model]} {$w=...} {$l=...} {m=...}
+        #  (all after n1,n2 optional)
+        #    We require $[model]. And add 3rd term IFF $SUB=.
+        # out format:
+        #   Xrname n1 n2 t3 model w=... l=... m=... par1=...
+        if c0 == "r":
+            tlen = len(tok)
+            if tlen < 3:
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Res does not have minimum two terminals:"
+                tmp += [ msg, i ]
+                continue
+            if cdlModel == "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Res missing required $[<model>] directive:"
+                tmp += [ msg, i ]
+                continue
+            tok = mapRes(cdlTerm, cdlModel, tok, options)
+            # add X to tok0.
+            if options['subckt']:
+                tok[0] = "X" + tok[0]
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # bipolar formats:
+        #   qname n1 n2 n3 model <numeric> M=... $EA=...
+        #   qname n1 n2 n3 model $EA=... <numeric> M=... 
+        #   qname n1 n2 n3 model {$EA=...} {$W=...} {$L=...} {$SUB=...} {M=...}
+        # No: l:qname n1 n2 n3 {nsub} model {$EA=...} {$W=...} {$L=...} {$SUB=...} {M=...}
+        #   CDL-std adds an optional {nsub} substrate before the model: we don't support it.
+        #   Add a 4th term IFF $SUB=. We propagate optional W/L (or derived from $W/$L).
+        #   EA is the emitter size; not supported by XFAB, so it is deleted.
+        #   We require 3 terminals and a model. It is an error to specify $[model].
+        #
+        # out format:
+        #   Xqname n1 n2 n3 model M=... par1=...
+        if c0 == "q":
+            tlen = len(tok)
+            if tlen < 5:
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Bipolar does not have minimum three terminals and a model:"
+                tmp += [ msg, i ]
+                continue
+            if cdlModel != "":
+                err+=1
+                msg = "*cdl2spi.py: ERROR: Bipolar does not support $[<model>] directive:"
+                tmp += [ msg, i ]
+                continue
+            tok = mapBipolar(cdlTerm, cdlModel, tok, options)
+            # add X to tok0.
+            if options['subckt']:
+                tok[0] = "X" + tok[0]
+            tmp += [ " ".join(tok) ]
+            continue
+
+        # Anything else. What to do: preserve AS-IS with a warning, or
+        # flag it as an ERROR?
+        tmp += [ "*cdl2spi.py: ERROR: unrecognized line:", i ]
+        err+=1
+        # tmp += [ "*cdl2spi.py: WARNING: unrecognized line:", " ".join(tok) ]
+        # tmp += [ "*cdl2spi.py: WARNING: unrecognized line:", i ]
+        # warn+=1
+
+    # Re-wrap continuation lines at 80 characters
+    lines = []
+    for line in tmp:
+        lines.append('\n+ '.join(textwrap.wrap(line, 80)))
+
+    # Write output
+
+    if fnmOut == sys.stdout:
+        for i in lines:
+            print(i)
+    else:
+        try:
+            with open(fnmOut, 'w') as outFile:
+                for i in lines:
+                    print(i, file=outFile)
+        except OSError:
+            print('cdl2spi.py: failed to open ' + fnmOut + ' for writing.', file=sys.stderr)
+            return 1
+
+    # exit status: indicates if there were errors.
+    print( "*cdl2spi.py: %d errors, %d warnings" % (err, warn))
+    return err
+
+if __name__ == '__main__':
+
+    options = {}
+
+    # Set option defaults
+    options['debug'] = False
+    options['subckt'] = False
+    options['dscale'] = ''
+    options['addinm'] = []
+    options['ignore'] = []
+
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.find('-', 0) == 0:
+            thisopt = item.split('=')
+            optname = thisopt[0][1:]
+            optval = '='.join(thisopt[1:])
+            if not optname in options:
+                print('Unknown option -' + optname + '; ignoring.')
+            else:
+                lastoptval = options[optname]
+                if len(thisopt) == 1:
+                    options[optname] = True
+                elif lastoptval == '':
+                    options[optname] = optval
+                else:
+                    options[optname].append(optval)
+        else:
+            arguments.append(item)
+
+    # Supported primitive devices (FET, diode, resistor, capacitor, bipolar)
+    primch  = 'mdrcq'
+    primch2 = 'mdrcqx'
+
+    if len(arguments) > 0:
+        fnmIn = arguments[0]
+
+    if len(arguments) > 1:
+        fnmOut = arguments[1]
+    else:
+        fnmOut = sys.stdout
+
+    if options['debug']:
+        test_mapSubcktInst1()
+        test_mapSubcktInst2()
+        test_mapMfactor()
+        test_CDLparam()
+        test_CDLtermModel()
+        test_mapDiscard()
+        test_mapPartAssign()
+        test_mapLookup()
+        test_mapDiode()
+        test_mapMos()
+        test_mapCap()
+        test_mapRes()
+        test_mapBipolar()
+
+    elif len(arguments) > 2 or len(arguments) < 1:
+        print('Usage: cdl2spi.py <cdlFileName> [<spiFileName>]')
+        print('   Options:' )
+        print('       -debug              run debug tests')
+        print('       -dscale=<suffix>    rescale lengths with <suffix>')
+        print('       -addinm=<param>     add multiplier parameter <param>')
+        print('       -ignore=<param>     ignore parameter <param>')
+        print('       -subckt             convert primitive devices to subcircuits')
+        sys.exit(1)
+
+    else:
+        result = cdl2spice(fnmIn, fnmOut, options)
+        sys.exit(result)
+
diff --git a/common/change_gds_date.py b/common/change_gds_date.py
new file mode 100755
index 0000000..0aa0056
--- /dev/null
+++ b/common/change_gds_date.py
@@ -0,0 +1,142 @@
+#!/usr/bin/env python3
+# Script to read a GDS file, modify the timestamp(s), and rewrite the GDS file.
+
+import os
+import sys
+import datetime
+
+def usage():
+    print('change_gds_date.py <create_stamp> <mod_stamp> <path_to_gds_in> [<path_to_gds_out>]')
+
+if __name__ == '__main__':
+    debug = False
+
+    if len(sys.argv) == 1:
+        print("No options given to change_gds_date.py.")
+        usage()
+        sys.exit(0)
+
+    optionlist = []
+    arguments = []
+
+    for option in sys.argv[1:]:
+        if option.find('-', 0) == 0:
+            optionlist.append(option)
+        else:
+            arguments.append(option)
+
+    if len(arguments) < 3 or len(arguments) > 4:
+        print("Wrong number of arguments given to change_gds_date.py.")
+        usage()
+        sys.exit(0)
+
+    if '-debug' in optionlist:
+        debug = True
+
+    createstamp = arguments[0]
+    modstamp = arguments[1]
+    source = arguments[2]
+
+    # If modstamp is zero or negative, then set the modification timestamp
+    # to be the same as the creation timestamp.
+    try:
+        if int(modstamp) <= 0:
+            modstamp = createstamp
+    except:
+        pass
+
+    # If only three arguments are provided, then overwrite the source file.
+    if len(arguments) == 4:
+        dest = arguments[3]
+    else:
+        dest = arguments[2]
+
+    sourcedir = os.path.split(source)[0]
+    gdsinfile = os.path.split(source)[1]
+
+    destdir = os.path.split(dest)[0]
+    gdsoutfile = os.path.split(dest)[1]
+
+    with open(source, 'rb') as ifile:
+        gdsdata = ifile.read()
+
+    # Generate 12-byte modification timestamp data from date.
+    try:
+        modtime = datetime.datetime.fromtimestamp(int(modstamp))
+    except:
+        modtime = datetime.datetime.strptime(modstamp, "%m/%d/%Y %H:%M:%S")
+
+    modyear = modtime.year - 1900
+
+    year = modyear.to_bytes(2, byteorder='big')
+    month = modtime.month.to_bytes(2, byteorder='big')
+    day = modtime.day.to_bytes(2, byteorder='big')
+    hour = modtime.hour.to_bytes(2, byteorder='big')
+    minute = modtime.minute.to_bytes(2, byteorder='big')
+    second = modtime.second.to_bytes(2, byteorder='big')
+
+    gdsmodstamp = year + month + day + hour + minute + second
+
+    # Generate 12-byte creation timestamp data from date.
+    try:
+        createtime = datetime.datetime.fromtimestamp(int(createstamp))
+    except:
+        createtime = datetime.datetime.strptime(createstamp, "%m/%d/%Y %H:%M:%S")
+
+    createyear = createtime.year - 1900
+
+    year = createyear.to_bytes(2, byteorder='big')
+    month = createtime.month.to_bytes(2, byteorder='big')
+    day = createtime.day.to_bytes(2, byteorder='big')
+    hour = createtime.hour.to_bytes(2, byteorder='big')
+    minute = createtime.minute.to_bytes(2, byteorder='big')
+    second = createtime.second.to_bytes(2, byteorder='big')
+
+    gdscreatestamp = year + month + day + hour + minute + second
+
+    # To be done:  Allow the user to select which datestamps to change
+    # (library or structure).  Otherwise, apply the same datestamps to both.
+
+    recordtypes = ['beginstr', 'beginlib']
+    recordfilter = [5, 1]
+
+    datalen = len(gdsdata)
+    dataptr = 0
+    while dataptr < datalen:
+        # Read the next stream record header and extract the record length.
+        bheader = gdsdata[dataptr:dataptr + 2]
+        reclen = int.from_bytes(bheader, 'big')
+        if reclen == 0:
+            print('Error: found zero-length record at position ' + str(dataptr))
+            break
+
+        rectype = gdsdata[dataptr + 2]
+        datatype = gdsdata[dataptr + 3]
+
+        brectype = rectype.to_bytes(1, byteorder='big')
+        bdatatype = datatype.to_bytes(1, byteorder='big')
+
+        if rectype in recordfilter:
+            # Datatype should be 2
+            if datatype != 2:
+                print('Error:  Header data type is not a 2-byte integer!')
+            if reclen != 28:
+                print('Error:  Header record length is not 28!')
+            if debug:
+                print('Record type = ' + str(rectype) + ' data type = ' + str(datatype) + ' length = ' + str(reclen))
+
+            before = gdsdata[0:dataptr]
+            after = gdsdata[dataptr + reclen:]
+
+            # Assemble the new record
+            newrecord = bheader + brectype + bdatatype + gdscreatestamp + gdsmodstamp
+            # Reassemble the GDS data around the new record
+            gdsdata = before + newrecord + after
+
+        # Advance the pointer past the data
+        dataptr += reclen
+
+    with open(dest, 'wb') as ofile:
+        ofile.write(gdsdata)
+
+    exit(0)
diff --git a/common/change_gds_string.py b/common/change_gds_string.py
new file mode 100755
index 0000000..b8a0321
--- /dev/null
+++ b/common/change_gds_string.py
@@ -0,0 +1,127 @@
+#!/usr/bin/env python3
+# Script to read a GDS file, modify the given string, and rewrite the GDS file.
+# The string may be a substring:  the GDS file is parsed completely for
+# library names, structure names, instance names, and other strings, and the
+# replacement is made everywhere the search text occurs.  The bounds of the
+# entire string containing the search text are found, and the record length
+# is adjusted accordingly.
+
+import os
+import sys
+
+def usage():
+    print('change_gds_string.py <old_string> <new_string> <path_to_gds_in> [<path_to_gds_out>]')
+
+if __name__ == '__main__':
+    debug = False
+
+    if len(sys.argv) == 1:
+        print("No options given to change_gds_string.py.")
+        usage()
+        sys.exit(0)
+
+    optionlist = []
+    arguments = []
+
+    for option in sys.argv[1:]:
+        if option.find('-', 0) == 0:
+            optionlist.append(option)
+        else:
+            arguments.append(option)
+
+    if len(arguments) < 3 or len(arguments) > 4:
+        print("Wrong number of arguments given to change_gds_string.py.")
+        usage()
+        sys.exit(0)
+
+    if '-debug' in optionlist:
+        debug = True
+
+    oldstring = arguments[0]
+    newstring = arguments[1]
+    source = arguments[2]
+
+    # If only three arguments are provided, then overwrite the source file.
+    if len(arguments) == 4:
+        dest = arguments[3]
+    else:
+        dest = arguments[2]
+
+    sourcedir = os.path.split(source)[0]
+    gdsinfile = os.path.split(source)[1]
+
+    destdir = os.path.split(dest)[0]
+    gdsoutfile = os.path.split(dest)[1]
+
+    with open(source, 'rb') as ifile:
+        gdsdata = ifile.read()
+
+    # To be done:  Allow the user to select a specific record type or types
+    # in which to restrict the string substitution.  If no restrictions are
+    # specified, then substitute in library name, structure name, and strings.
+
+    recordtypes = ['libname', 'strname', 'sname', 'string']
+    recordfilter = [2, 6, 18, 25]
+    bsearch = bytes(oldstring, 'ascii')
+    brep = bytes(newstring, 'ascii')
+
+    datalen = len(gdsdata)
+    if debug:
+        print('Original data length = ' + str(datalen))
+    dataptr = 0
+    while dataptr < datalen:
+        # Read stream records up to any string, then search for search text.
+        bheader = gdsdata[dataptr:dataptr + 2]
+        reclen = int.from_bytes(bheader, 'big')
+        newlen = reclen
+        if newlen == 0:
+            print('Error: found zero-length record at position ' + str(dataptr))
+            break
+
+        rectype = gdsdata[dataptr + 2]
+        datatype = gdsdata[dataptr + 3]
+
+        if rectype in recordfilter:
+            # Datatype 6 is STRING
+            if datatype == 6:
+                if debug:
+                    print('Record type = ' + str(rectype) + ' data type = ' + str(datatype) + ' length = ' + str(reclen))
+
+                bstring = gdsdata[dataptr + 4: dataptr + reclen]
+                repstring = bstring.replace(bsearch, brep)
+                if repstring != bstring:
+                    before = gdsdata[0:dataptr]
+                    after = gdsdata[dataptr + reclen:]
+                    newlen = len(repstring) + 4
+                    # Record sizes must be even
+                    if newlen % 2 != 0:
+                        # Was original string padded with null byte?  If so,
+                        # remove the null byte and reduce newlen.  Otherwise,
+                        # add a null byte and increase newlen.
+                        if bstring[-1] == 0:
+                            repstring = repstring[0:-1]
+                            newlen -= 1
+                        else:
+                            repstring += b'\x00'
+                            newlen += 1
+                            
+                    bnewlen = newlen.to_bytes(2, byteorder='big')
+                    brectype = rectype.to_bytes(1, byteorder='big')
+                    bdatatype = datatype.to_bytes(1, byteorder='big')
+
+                    # Assemble the new record
+                    newrecord = bnewlen + brectype + bdatatype + repstring
+                    # Reassemble the GDS data around the new record
+                    gdsdata = before + newrecord[0:newlen] + after
+                    # Adjust the data end location
+                    datalen += (newlen - reclen)
+
+                    if debug:
+                        print('Replaced ' + str(bstring) + ' with ' + str(repstring)) 
+
+        # Advance the pointer past the data
+        dataptr += newlen
+
+    with open(dest, 'wb') as ofile:
+        ofile.write(gdsdata)
+
+    exit(0)
diff --git a/common/changepath.py b/common/changepath.py
new file mode 100755
index 0000000..3ebd38c
--- /dev/null
+++ b/common/changepath.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python3
+#
+# changepath.py:  Look up all .mag files in maglef paths from the root
+# PDK directory given as the first argument, and replace them with the
+# path that is given as the second argument.
+#
+# The purpose of this script is to prepare a technology for relocation.
+# This script may be expanded to take care of all relocation issues.
+# For now, only the property "GDS_FILE" in each .mag file is modified.
+#
+# Usage, e.g.:
+#
+# changepath.py /home/tim/projects/efabless/tech/XFAB/EFXH035B/libs.ref/mag/D_CELLS /ef/tech/XFAB/EFXH035B/libs.ref/mag/D_CELLS
+
+import os
+import re
+import sys
+import glob
+
+def usage():
+    print("changepath.py <orig_path_to_dir> <target_path_to_dir>")
+    return 0
+
+if __name__ == '__main__':
+
+    if len(sys.argv) == 1:
+        usage()
+        sys.exit(0)
+
+    optionlist = []
+    arguments = []
+
+    for option in sys.argv[1:]:
+        if option.find('-', 0) == 0:
+            optionlist.append(option)
+        else:
+            arguments.append(option)
+
+    if len(arguments) != 2:
+        print("Wrong number of arguments given to changepath.py.")
+        usage()
+        sys.exit(0)
+
+    source = arguments[0]
+    target = arguments[1]
+
+    gdssource = source.replace('/mag/', '/gds/')
+    gdssource = gdssource.replace('/maglef/', '/gds/')
+    gdstarget = target.replace('/mag/', '/gds/')
+    gdstarget = gdstarget.replace('/maglef/', '/gds/')
+
+    magpath = source + '/*.mag'
+    sourcefiles = glob.glob(magpath)
+
+    if len(sourcefiles) == 0:
+        print("Warning:  No files were found in the path " + magpath + ".")
+
+    for file in sourcefiles:
+        # print("Converting file " + file)
+        with open(file, 'r') as ifile:
+            magtext = ifile.read().splitlines() 
+
+        proprex = re.compile('string[ \t]+GDS_FILE[ \t]+([^ \t]+)')
+        with open(file, 'w') as ofile:
+            for line in magtext:
+                pmatch = proprex.match(line)
+                if pmatch:
+                    filepath = pmatch.group(1)
+                    if filepath.startswith(gdssource):
+                        print('string GDS_FILE ' + filepath.replace(gdssource, gdstarget), file=ofile)
+                    else:
+                        print(line, file=ofile)
+                else:
+                    print(line, file=ofile)
+
diff --git a/common/changetech.sh b/common/changetech.sh
new file mode 100755
index 0000000..383aac6
--- /dev/null
+++ b/common/changetech.sh
@@ -0,0 +1,9 @@
+#!/bin/sh
+#
+# Example file converts the techname in all magic database files.
+
+for i in *.mag ; do
+    /ef/efabless/bin/preproc.py -DEFXH035A=EFXH035B "$i" > tmp.out
+    mv tmp.out "$i" ;
+done
+
diff --git a/common/compare_dirs.py b/common/compare_dirs.py
new file mode 100755
index 0000000..21f8dc6
--- /dev/null
+++ b/common/compare_dirs.py
@@ -0,0 +1,439 @@
+#!/usr/bin/env python3
+#
+# compare_dirs.py <path>
+#
+#
+# Compare the format subdirectories of <path> and report on which files do not appear
+# in all of them.  If a directory has no files in it, then it is ignored.
+#
+# NOTE:  This script was not designed for files in the "ef_format" file structure.
+
+import os
+import sys
+
+def compare_dirs(path, styles, formats, debug):
+    do_cdl = 'cdl' in formats
+    do_gds = 'gds' in formats
+    do_lef = 'lef' in formats
+    do_mag = 'mag' in formats
+    do_maglef = 'maglef' in formats
+    do_verilog = 'verilog' in formats
+
+    try:
+        d1 = os.listdir(path + '/cdl')
+    except:
+        d1 = []
+    d1e = list(item for item in d1 if os.path.splitext(item)[1] == '.cdl')
+    d1r = list(os.path.splitext(item)[0] for item in d1e)
+    try:
+        d2 = os.listdir(path + '/gds')
+    except:
+        d2 = []
+    d2e = list(item for item in d2 if os.path.splitext(item)[1] == '.gds')
+    d2r = list(os.path.splitext(item)[0] for item in d2e)
+    try:
+        d3 = os.listdir(path + '/lef')
+    except:
+        d3 = []
+    d3e = list(item for item in d3 if os.path.splitext(item)[1] == '.lef')
+    d3r = list(os.path.splitext(item)[0] for item in d3e)
+    try:
+        d4 = os.listdir(path + '/mag')
+    except:
+        d4 = []
+    d4e = list(item for item in d4 if os.path.splitext(item)[1] == '.mag')
+    d4r = list(os.path.splitext(item)[0] for item in d4e)
+    try:
+        d5 = os.listdir(path + '/maglef')
+    except:
+        d5 = []
+    d5e = list(item for item in d5 if os.path.splitext(item)[1] == '.mag')
+    d5r = list(os.path.splitext(item)[0] for item in d5e)
+    try:
+        d6 = os.listdir(path + '/verilog')
+    except:
+        d6 = []
+    d6e = list(item for item in d6 if os.path.splitext(item)[1] == '.v')
+    d6r = list(os.path.splitext(os.path.splitext(item)[0])[0] for item in d6e)
+ 
+    d1r.sort()
+    d2r.sort()
+    d3r.sort()
+    d4r.sort()
+    d5r.sort()
+    d6r.sort()
+
+    d1_2 = list(item for item in d1r if item not in d2r)
+    d1_3 = list(item for item in d1r if item not in d3r)
+    d1_4 = list(item for item in d1r if item not in d4r)
+    d1_5 = list(item for item in d1r if item not in d5r)
+    d1_6 = list(item for item in d1r if item not in d6r)
+
+    d2_1 = list(item for item in d2r if item not in d1r)
+    d2_3 = list(item for item in d2r if item not in d3r)
+    d2_4 = list(item for item in d2r if item not in d4r)
+    d2_5 = list(item for item in d2r if item not in d5r)
+    d2_6 = list(item for item in d2r if item not in d6r)
+
+    d3_1 = list(item for item in d3r if item not in d1r)
+    d3_2 = list(item for item in d3r if item not in d2r)
+    d3_4 = list(item for item in d3r if item not in d4r)
+    d3_5 = list(item for item in d3r if item not in d5r)
+    d3_6 = list(item for item in d3r if item not in d6r)
+
+    d4_1 = list(item for item in d4r if item not in d1r)
+    d4_2 = list(item for item in d4r if item not in d2r)
+    d4_3 = list(item for item in d4r if item not in d3r)
+    d4_5 = list(item for item in d4r if item not in d5r)
+    d4_6 = list(item for item in d4r if item not in d6r)
+
+    d5_1 = list(item for item in d5r if item not in d1r)
+    d5_2 = list(item for item in d5r if item not in d2r)
+    d5_3 = list(item for item in d5r if item not in d3r)
+    d5_4 = list(item for item in d5r if item not in d4r)
+    d5_6 = list(item for item in d5r if item not in d6r)
+
+    d6_1 = list(item for item in d6r if item not in d1r)
+    d6_2 = list(item for item in d6r if item not in d2r)
+    d6_3 = list(item for item in d6r if item not in d3r)
+    d6_4 = list(item for item in d6r if item not in d4r)
+    d6_5 = list(item for item in d6r if item not in d5r)
+
+    d_complete = []
+    if do_cdl:
+        d_complete.extend(list(item for item in d1r if item not in d_complete))
+    if do_gds:
+        d_complete.extend(list(item for item in d2r if item not in d_complete))
+    if do_lef:
+        d_complete.extend(list(item for item in d3r if item not in d_complete))
+    if do_mag:
+        d_complete.extend(list(item for item in d4r if item not in d_complete))
+    if do_maglef:
+        d_complete.extend(list(item for item in d5r if item not in d_complete))
+    if do_verilog:
+        d_complete.extend(list(item for item in d6r if item not in d_complete))
+
+    d_all = d_complete
+    if do_cdl:
+        d_all = list(item for item in d_all if item in d1r)
+    if do_gds:
+        d_all = list(item for item in d_all if item in d2r)
+    if do_lef:
+        d_all = list(item for item in d_all if item in d3r)
+    if do_mag:
+        d_all = list(item for item in d_all if item in d4r)
+    if do_maglef:
+        d_all = list(item for item in d_all if item in d5r)
+    if do_verilog:
+        d_all = list(item for item in d_all if item in d6r)
+
+    d_notall = list(item for item in d_complete if item not in d_all)
+
+    d_all.sort()
+    d_complete.sort()
+    d_notall.sort()
+    
+    if debug:
+        print('Selected styles option: ' + ','.join(styles))
+        print('Selected formats option: ' + ','.join(formats))
+        print('\nd_complete = ' + ','.join(d_complete))
+        print('\nd_notall = ' + ','.join(d_notall) + '\n')
+
+    print('Library file type cross-correlation:' + '\n')
+
+    if 'allgood' in styles:
+        print('Cells appearing in all libraries:')
+        for cell in d_all:
+            print(cell)
+
+    if 'cross' in styles:
+        # Print which cells appear in one format but not in another, for all format pairs
+        if do_cdl:
+            print('')
+            if do_gds and len(d1_2) > 0:
+                print('Cells appearing in cdl/ but not in gds/:')
+                for cell in d1_2:
+                    print(cell)
+            if do_lef and len(d1_3) > 0:
+                print('Cells appearing in cdl/ but not in lef/:')
+                for cell in d1_3:
+                    print(cell)
+            if do_mag and len(d1_4) > 0:
+                print('Cells appearing in cdl/ but not in mag/:')
+                for cell in d1_4:
+                    print(cell)
+            if do_maglef and len(d1_5) > 0:
+                print('Cells appearing in cdl/ but not in maglef/:')
+                for cell in d1_5:
+                    print(cell)
+            if do_verilog and len(d1_6) > 0:
+                print('Cells appearing in cdl/ but not in verilog/:')
+                for cell in d1_6:
+                    print(cell)
+
+        if do_gds:
+            print('')
+            if do_cdl and len(d2_1) > 0:
+                print('Cells appearing in gds/ but not in cdl/:')
+                for cell in d2_1:
+                    print(cell)
+            if do_lef and len(d2_3) > 0:
+                print('Cells appearing in gds/ but not in lef/:')
+                for cell in d2_3:
+                    print(cell)
+            if do_mag and len(d2_4) > 0:
+                print('Cells appearing in gds/ but not in mag/:')
+                for cell in d2_4:
+                    print(cell)
+            if do_maglef and len(d2_5) > 0:
+                print('Cells appearing in gds/ but not in maglef/:')
+                for cell in d2_5:
+                    print(cell)
+            if do_verilog and len(d2_6) > 0:
+                print('Cells appearing in gds/ but not in verilog/:')
+                for cell in d2_6:
+                    print(cell)
+
+        if do_lef:
+            print('')
+            if do_cdl and len(d3_1) > 0:
+                print('Cells appearing in lef/ but not in cdl/:')
+                for cell in d3_1:
+                    print(cell)
+            if do_gds and len(d3_2) > 0:
+                print('Cells appearing in lef/ but not in gds/:')
+                for cell in d3_2:
+                    print(cell)
+            if do_mag and len(d3_4) > 0:
+                print('Cells appearing in lef/ but not in mag/:')
+                for cell in d3_4:
+                    print(cell)
+            if do_maglef and len(d3_5) > 0:
+                print('Cells appearing in lef/ but not in maglef/:')
+                for cell in d3_5:
+                    print(cell)
+            if do_verilog and len(d3_6) > 0:
+                print('Cells appearing in lef/ but not in verilog/:')
+                for cell in d3_6:
+                    print(cell)
+
+        if do_mag:
+            print('')
+            if do_cdl and len(d4_1) > 0:
+                print('Cells appearing in mag/ but not in cdl/:')
+                for cell in d4_1:
+                    print(cell)
+            if do_gds and len(d4_2) > 0:
+                print('Cells appearing in mag/ but not in gds/:')
+                for cell in d4_2:
+                    print(cell)
+            if do_lef and len(d4_3) > 0:
+                print('Cells appearing in mag/ but not in lef/:')
+                for cell in d4_3:
+                    print(cell)
+            if do_maglef and len(d4_5) > 0:
+                print('Cells appearing in mag/ but not in maglef/:')
+                for cell in d4_5:
+                    print(cell)
+            if do_verilog and len(d4_6) > 0:
+                print('Cells appearing in mag/ but not in verilog/:')
+                for cell in d4_6:
+                    print(cell)
+
+        if do_maglef:
+            print('')
+            if do_cdl and len(d5_1) > 0:
+                print('Cells appearing in maglef/ but not in cdl/:')
+                for cell in d5_1:
+                    print(cell)
+            if do_gds and len(d5_2) > 0:
+                print('Cells appearing in maglef/ but not in gds/:')
+                for cell in d5_2:
+                    print(cell)
+            if do_lef and len(d5_3) > 0:
+                print('Cells appearing in maglef/ but not in lef/:')
+                for cell in d5_3:
+                    print(cell)
+            if do_mag and len(d5_4) > 0:
+                print('Cells appearing in maglef/ but not in mag/:')
+                for cell in d5_4:
+                    print(cell)
+            if do_verilog and len(d5_6) > 0:
+                print('Cells appearing in maglef/ but not in verilog/:')
+                for cell in d5_6:
+                    print(cell)
+        
+        if do_verilog:
+            print('')
+            if do_cdl and len(d6_1) > 0:
+                print('Cells appearing in verilog/ but not in cdl/:')
+                for cell in d6_1:
+                    print(cell)
+            if do_gds and len(d6_2) > 0:
+                print('Cells appearing in verilog/ but not in gds/:')
+                for cell in d6_2:
+                    print(cell)
+            if do_lef and len(d6_3) > 0:
+                print('Cells appearing in verilog/ but not in lef/:')
+                for cell in d6_3:
+                    print(cell)
+            if do_mag and len(d6_4) > 0:
+                print('Cells appearing in verilog/ but not in mag/:')
+                for cell in d6_4:
+                    print(cell)
+            if do_maglef and len(d6_5) > 0:
+                print('Cells appearing in verilog/ but not in maglef/:')
+                for cell in d6_5:
+                    print(cell)
+
+    if 'cell' in styles:
+        # Print one cell per row, with list of formats per cell
+        for cell in d_complete:
+            informats = []
+            if do_cdl and cell in d1r:
+                informats.append('CDL')
+            if do_gds and cell in d2r:
+                informats.append('GDS')
+            if do_lef and cell in d3r:
+                informats.append('LEF')
+            if do_mag and cell in d4r:
+                informats.append('mag')
+            if do_maglef and cell in d5r:
+                informats.append('maglef')
+            if do_verilog and cell in d6r:
+                informats.append('verilog')
+            print(cell + ': ' + ','.join(informats))
+
+    if 'notcell' in styles:
+        # Print one cell per row, with list of missing formats per cell
+        for cell in d_complete:
+            informats = []
+            if do_cdl and cell not in d1r:
+                informats.append('CDL')
+            if do_gds and cell not in d2r:
+                informats.append('GDS')
+            if do_lef and cell not in d3r:
+                informats.append('LEF')
+            if do_mag and cell not in d4r:
+                informats.append('mag')
+            if do_maglef and cell not in d5r:
+                informats.append('maglef')
+            if do_verilog and cell not in d6r:
+                informats.append('verilog')
+            print(cell + ': ' + ','.join(informats))
+
+    if 'table' in styles:
+        cellnamelen = 0
+        for cell in d_complete:
+            if len(cell) > cellnamelen:
+                cellnamelen = len(cell)
+
+        # Print one cell per row, with list of formats per cell in tabular form
+        outline = 'cell'
+        outline += ' ' * (cellnamelen - 4)
+        fmtspc = 0
+        if do_cdl:
+            outline += ' CDL'
+            fmtspc += 4
+        if do_gds:
+            outline += ' GDS'
+            fmtspc += 4
+        if do_lef:
+            outline += ' LEF'
+            fmtspc += 4
+        if do_mag:
+            outline += ' mag'
+            fmtspc += 4
+        if do_maglef:
+            outline += ' maglef'
+            fmtspc += 7
+        if do_verilog:
+            outline += ' verilog'
+            fmtspc += 8
+        print(outline)
+        print('-' * (cellnamelen + fmtspc))
+        for cell in d_complete:
+            informats = []
+            outline = cell
+            outline += ' ' * (cellnamelen - len(cell))
+            if do_cdl:
+                if cell in d1r:
+                    outline += ' X  '
+                else:
+                    outline += '    '
+            if do_gds:
+                if cell in d2r:
+                    outline += ' X  '
+                else:
+                    outline += '    '
+            if do_lef:
+                if cell in d3r:
+                    outline += ' X  '
+                else:
+                    outline += '    '
+            if do_mag:
+                if cell in d4r:
+                    outline += ' X  '
+                else:
+                    outline += '    '
+            if do_maglef:
+                if cell in d5r:
+                    outline += ' X     '
+                else:
+                    outline += '       '
+            if do_verilog:
+                if cell in d6r:
+                    outline += ' X'
+                else:
+                    outline += '  '
+            print(outline)
+        print('-' * (cellnamelen + fmtspc))
+
+def usage():
+    print('compare_dirs.py <path_to_dir> [-styles=<style_list>] [-debug] [-formats=<format_list>|"all"]')
+    print('    <format_list>:  Comma-separated list of one or more of the following formats:')
+    print('             cdl, gds, lef, verilog, mag, maglef')
+    print('    <style_list>: Comma-separated list of one or more of the following styles:')
+    print('             allgood, cross, cell, notcell, table')
+    return 0
+
+if __name__ == '__main__':
+
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.find('-', 0) == 0:
+            options.append(item)
+        else:
+            arguments.append(item)
+
+    if len(arguments) < 1:
+        print("Not enough arguments given to compare_dirs.py.")
+        usage()
+        sys.exit(0)
+
+    debug = True if '-debug' in options else False
+
+    allformats = ['cdl', 'gds', 'lef', 'mag', 'maglef', 'verilog']
+    allstyles = ['allgood', 'cross', 'cell', 'notcell', 'table']
+
+    formats = allformats
+    styles = ['table']
+
+    for option in options:
+        if '=' in option:
+            optpair = option.split('=')
+            if optpair[0] == '-styles' or optpair[0] == '-style':
+                if optpair[1] == 'all':
+                    styles = allstyles
+                else:
+                    styles = optpair[1].split(',')
+            elif optpair[0] == '-formats' or optpair[0] == '-format':
+                if optpair[1] == 'all':
+                    formats = allformats
+                else:
+                    formats = optpair[1].split(',')
+
+    path = arguments[0]
+    compare_dirs(path, styles, formats, debug)
+    sys.exit(0)
diff --git a/common/consoletext.py b/common/consoletext.py
new file mode 100755
index 0000000..03276fb
--- /dev/null
+++ b/common/consoletext.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python3
+#
+#--------------------------------------------------------
+"""
+  consoletext --- extends tkinter class Text
+  with stdout and stderr redirection.
+"""
+#--------------------------------------------------------
+# Written by Tim Edwards
+# efabless, inc.
+# September 11, 2016
+# Version 0.1
+#--------------------------------------------------------
+
+import sys
+import tkinter
+
+class ConsoleText(tkinter.Text):
+    linelimit = 500
+    class IORedirector(object):
+        '''A general class for redirecting I/O to this Text widget.'''
+        def __init__(self,text_area):
+            self.text_area = text_area
+
+    class StdoutRedirector(IORedirector):
+        '''A class for redirecting stdout to this Text widget.'''
+        def write(self,str):
+            self.text_area.write(str,False)
+
+    class StderrRedirector(IORedirector):
+        '''A class for redirecting stderr to this Text widget.'''
+        def write(self,str):
+            self.text_area.write(str,True)
+
+    def __init__(self, master=None, cnf={}, **kw):
+        '''See the __init__ for Tkinter.Text.'''
+
+        tkinter.Text.__init__(self, master, cnf, **kw)
+
+        self.tag_configure('stdout',background='white',foreground='black')
+        self.tag_configure('stderr',background='white',foreground='red')
+        # None of these works!  Cannot change selected text background!
+        self.config(selectbackground='blue', selectforeground='white')
+        self.tag_configure('sel',background='blue',foreground='white')
+
+    def write(self, val, is_stderr=False):
+        lines = int(self.index('end-1c').split('.')[0])
+        if lines > self.linelimit:
+            self.delete('1.0', str(lines - self.linelimit) + '.0')
+        self.insert('end',val,'stderr' if is_stderr else 'stdout')
+        self.see('end')
+
+    def limit(self, val):
+        self.linelimit = val
diff --git a/common/fixspice.py b/common/fixspice.py
new file mode 100755
index 0000000..a12ae42
--- /dev/null
+++ b/common/fixspice.py
@@ -0,0 +1,348 @@
+#!/usr/bin/env python3
+#
+# fixspice ---
+#
+# This script fixes problems in SPICE models to make them ngspice-compatible.
+# The methods searched and corrected in this file correspond to ngspice
+# version 30.
+#
+# This script is a filter to be run by setting the name of this script as
+# the value to "filter=" for the model install in the PDK Makefile in
+# open_pdks.
+#
+# This script was converted from the bash script by Risto Bell, with improvements.
+#
+# This script is minimally invasive to the original SPICE file, making changes
+# while preserving comments and line continuations.  In order to properly parse
+# the file, comments and line continuations are recorded and removed from the
+# file contents, then inserted again before the modified file is written.
+
+import re
+import os
+import sys
+import textwrap
+
+def filter(inname, outname, debug=False):
+    notparsed = []
+
+    # Read input.  Note that splitlines() performs the additional fix of
+    # correcting carriage-return linefeed (CRLF) line endings.
+    try:
+        with open(inname, 'r') as inFile:
+            spitext = inFile.read()
+    except OSError:
+        print('fixspice.py: failed to open ' + inname + ' for reading.', file=sys.stderr)
+        return 1
+    else:
+        if debug:
+            print('Fixing ngspice incompatibilities in file ' + inname + '.')
+
+    # Due to the complexity of comment lines embedded within continuation lines,
+    # the result needs to be processed line by line.  Blank lines and comment
+    # lines are removed from the text, replaced with tab characters, and collected
+    # in a separate array.  Then the continuation lines are unfolded, and each
+    # line processed.  Then it is all put back together at the end.
+
+    # First replace all tabs with spaces so we can use tabs as markers.
+    spitext = spitext.replace('\t', '    ')
+
+    # Now do an initial line split
+    spilines = spitext.splitlines()
+
+    # Search lines for comments and blank lines and replace them with tabs
+    # Replace continuation lines with tabs and preserve the position.
+    spitext = ''
+    for line in spilines:
+        if len(line) == 0:
+            notparsed.append('\n')
+            spitext += '\t '
+        elif line[0] == '*':
+            notparsed.append('\n' + line)
+            spitext += '\t '
+        elif line[0] == '+':
+            notparsed.append('\n+')
+            spitext += '\t ' + line[1:]
+        else:
+            spitext += '\n' + line
+
+    # Now split back into an array of lines
+    spilines = spitext.splitlines()
+
+    # Process input with regexp
+
+    fixedlines = []
+    modified = False
+
+    # Regular expression to find 'agauss(a,b,c)' lines and record a, b, and c
+    grex = re.compile('[^{]agauss\(([^,]*),([^,]*),([^)]*)\)', re.IGNORECASE)
+
+    # Regular expression to determine if the line is a .PARAM card    
+    paramrex = re.compile('^\.param', re.IGNORECASE)
+    # Regular expression to determine if the line is a .MODEL card    
+    modelrex = re.compile('^\.model', re.IGNORECASE)
+    # Regular expression to detect a .SUBCKT card
+    subcktrex = re.compile('^\.subckt', re.IGNORECASE)
+
+    for line in spilines:
+        devtype = line[0].upper() if len(line) > 0 else ''
+
+        # NOTE:  All filter functions below take variable fixedline, alter it, then
+        # set fixedline to the altered text for the next filter function.
+
+        fixedline = line
+
+        # Fix: Wrap "agauss(...)" in brackets and remove single quotes around expressions
+        # Example:
+        #    before: + SD_DN_CJ=agauss(7.900e-04,'1.580e-05*__LOT__',1)   dn_cj=SD_DN_CJ"
+        #    after:  + SD_DN_CJ={agauss(7.900e-04,1.580e-05*__LOT__,1)}   dn_cj=SD_DN_CJ"
+
+        # for gmatch in grex.finditer(fixedline):
+        while True:
+            gmatch = grex.search(fixedline)
+            if gmatch:
+                fixpart1 = gmatch.group(1).strip("'")
+                fixpart2 = gmatch.group(2).strip("'")
+                fixpart3 = gmatch.group(3).strip("'")
+                fixedline = fixedline[0:gmatch.span(0)[0] + 1] + '{agauss(' + fixpart1 + ',' + fixpart2 + ',' + fixpart3 + ')}' + fixedline[gmatch.span(0)[1]:]
+                if debug:
+                    print('Fixed agauss() call.')
+            else:
+                break
+
+        # Fix: Check for "dtemp=dtemp" and remove unless in a .param line
+        pmatch = paramrex.search(fixedline)
+        if not pmatch:
+            altered = re.sub(' dtemp=dtemp', ' ', fixedline, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Removed dtemp=dtemp from instance call')
+
+        # Fixes related to .MODEL cards:
+
+        mmatch = modelrex.search(fixedline)
+        if mmatch:
+
+            modeltype = fixedline.split()[2].lower()
+
+            if modeltype == 'nmos' or modeltype == 'pmos':
+
+                # Fixes related specifically to MOS models:
+
+                # Fix: Look for hspver=98.2 in FET model
+                altered = re.sub(' hspver[ ]*=[ ]*98\.2', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed hspver=98.2 from ' + modeltype + ' model')
+
+                # Fix:  Change level 53 FETs to level 49
+                altered = re.sub(' (level[ ]*=[ ]*)53', ' \g<1>49', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Changed level 53 ' + modeltype + ' to level 49')
+
+                # Fix: Look for version=4.3 or 4.5 FETs, change to 4.8.0 per recommendations
+                altered = re.sub(' (version[ ]*=[ ]*)4\.[35]', ' \g<1>4.8.0',
+					fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Changed version 4.3/4.5 ' + modeltype + ' to version 4.8.0')
+    
+                # Fix: Look for mulu0= (NOTE:  Might be supported for bsim4?)
+                altered = re.sub('mulu0[ ]*=[ ]*[0-9.e+-]*', '', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed mulu0= from ' + modeltype + ' model')
+
+                # Fix: Look for apwarn=
+                altered = re.sub(' apwarn[ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed apwarn= from ' + modeltype + ' model')
+
+                # Fix: Look for lmlt=
+                altered = re.sub(' lmlt[ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed lmlt= from ' + modeltype + ' model')
+
+                # Fix: Look for nf=
+                altered = re.sub(' nf[ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed nf= from ' + modeltype + ' model')
+
+                # Fix: Look for sa/b/c/d/=
+                altered = re.sub(' s[abcd][ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed s[abcd]= from ' + modeltype + ' model')
+
+                # Fix: Look for binflag= in MOS .MODEL
+                altered = re.sub(' binflag[ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed binflag= from ' + modeltype + ' model')
+
+                # Fix: Look for wref, lref= in MOS .MODEL (note:  could be found in other models?)
+                altered = re.sub(' [wl]ref[ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+                if altered != fixedline:
+                    fixedline = altered
+                    if debug:
+                        print('Removed wref=/lref= from ' + modeltype + ' model')
+
+            # TREF is a known issue for (apparently?) all device types
+            # Fix: Look for tref= in .MODEL
+            altered = re.sub(' tref[ ]*=[ ]*[0-9.e+-]*', ' ', fixedline, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Removed tref= from ' + modeltype + ' model')
+
+            # Fix: Look for double-dot model binning and replace with single dot
+            altered = re.sub('\.\.([0-9]+)', '.\g<1>', fixedline, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Collapsed double-dot model binning.')
+
+        # Various deleted parameters above may appear in instances, so those must be
+        # caught as well.  Need to catch expressions and variables in addition to the
+        # usual numeric assignments.
+
+        if devtype == 'M':
+            altered = re.sub(' nf=[^ \'\t]+', ' ', fixedline, flags=re.IGNORECASE)
+            altered = re.sub(' nf=\'[^\'\t]+\'', ' ', altered, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Removed nf= from MOSFET device instance')
+
+            altered = re.sub(' mulu0=[^ \'\t]+', ' ', fixedline, flags=re.IGNORECASE)
+            altered = re.sub(' mulu0=\'[^\'\t]+\'', ' ', altered, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Removed mulu0= from MOSFET device instance')
+
+            altered = re.sub(' s[abcd]=[^ \'\t]+', ' ', fixedline, flags=re.IGNORECASE)
+            altered = re.sub(' s[abcd]=\'[^\'\t]+\'', ' ', altered, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Removed s[abcd]= from MOSFET device instance')
+
+        # Remove tref= from all device type instances
+        altered = re.sub(' tref=[^ \'\t]+', ' ', fixedline, flags=re.IGNORECASE)
+        altered = re.sub(' tref=\'[^\'\t]+\'', ' ', altered, flags=re.IGNORECASE)
+        if altered != fixedline:
+            fixedline = altered
+            if debug:
+                print('Removed tref= from device instance')
+
+        # Check for use of ".subckt ...  <name>=l" (or <name>=w) with no antecedent
+        # for 'w' or 'l'.  It is the responsibility of the technology file for extraction
+        # to produce the correct name to pass to the subcircuit for length or width.
+
+        smatch = subcktrex.match(fixedline)
+        if smatch:
+            altered = fixedline
+            if fixedline.lower().endswith('=l'):
+                if ' l=' not in fixedline.lower():
+                    altered=re.sub( '=l$', '=0', fixedline, flags=re.IGNORECASE)
+            elif '=l ' in fixedline.lower():
+                if ' l=' not in fixedline.lower():
+                    altered=re.sub( '=l ', '=0 ', altered, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Replaced use of "l" with no definition in .subckt line')
+
+            altered = fixedline
+            if fixedline.lower().endswith('=w'):
+                if ' w=' not in fixedline.lower():
+                    altered=re.sub( '=w$', '=0', fixedline, flags=re.IGNORECASE)
+            elif '=w ' in fixedline.lower():
+                if ' w=' not in fixedline.lower():
+                    altered=re.sub( '=w ', '=0 ', altered, flags=re.IGNORECASE)
+            if altered != fixedline:
+                fixedline = altered
+                if debug:
+                    print('Replaced use of "w" with no definition in .subckt line')
+
+        fixedlines.append(fixedline)
+        if fixedline != line:
+            modified = True
+
+    # Reinsert embedded comments and continuation lines
+    if debug:
+        print('Reconstructing output')
+    olines = []
+    for line in fixedlines:
+        while '\t ' in line:
+            line = line.replace('\t ', notparsed.pop(0), 1)
+        olines.append(line)
+
+    fixedlines = '\n'.join(olines).strip()
+    olines = fixedlines.splitlines()
+
+    # Write output
+    if debug:
+        print('Writing output')
+    if outname is None:
+        for line in olines:
+            print(line)
+    else:
+        # If the output is a symbolic link but no modifications have been made,
+        # then leave it alone.  If it was modified, then remove the symbolic
+        # link before writing.
+        if os.path.islink(outname):
+            if not modified:
+                return 0
+            else:
+                os.unlink(outname)
+        try:
+            with open(outname, 'w') as outFile:
+                for line in olines:
+                    print(line, file=outFile)
+        except OSError:
+            print('fixspice.py: failed to open ' + outname + ' for writing.', file=sys.stderr)
+            return 1
+
+
+if __name__ == '__main__':
+
+    # This script expects to get one or two arguments.  One argument is
+    # mandatory and is the input file.  The other argument is optional and
+    # is the output file.  The output file and input file may be the same
+    # name, in which case the original input is overwritten.
+
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.find('-', 0) == 0:
+            options.append(item[1:])
+        else:
+            arguments.append(item)
+
+    if len(arguments) == 0:
+        print('Usage: fixspice.py <infile> [<outfile>] [-debug]', file=sys.stderr)
+        sys.exit(1)
+
+    infilename = arguments[0]
+
+    if len(arguments) > 1:
+        outfilename = arguments[1]
+    else:
+        outfilename = None
+
+    debug = 'debug' in options
+
+    result = filter(infilename, outfilename, debug)
+    sys.exit(result)
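The comment-and-continuation "fold" trick that fixspice.py relies on (replace blank, comment, and continuation lines with tab markers, edit whole logical cards, then reinsert the saved layout) can be sketched on its own. This is a simplified extract of the logic in `filter()`; the function names are illustrative:

```python
def fold(text):
    """Replace blank/comment lines with tab markers and splice '+'
    continuation lines onto the line they continue, saving the removed
    layout so it can be reinserted after editing."""
    saved = []
    out = ''
    for line in text.replace('\t', '    ').splitlines():
        if len(line) == 0:
            saved.append('\n')            # blank line
            out += '\t '
        elif line[0] == '*':
            saved.append('\n' + line)     # comment line
            out += '\t '
        elif line[0] == '+':
            saved.append('\n+')           # continuation line
            out += '\t ' + line[1:]
        else:
            out += '\n' + line
    return out.splitlines(), saved

def unfold(lines, saved):
    """Reinsert the saved layout markers, in order, one per tab marker."""
    saved = list(saved)
    olines = []
    for line in lines:
        while '\t ' in line:
            line = line.replace('\t ', saved.pop(0), 1)
        olines.append(line)
    return '\n'.join(olines).strip()

spice = '* comment\n.model m nmos\n+ level=49'
lines, saved = fold(spice)
# Each logical card is now one editable line; an unedited
# round trip restores the original text exactly.
```

Replacing real tabs with spaces first (as the script does) is what makes the tab character safe to use as a marker.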
diff --git a/common/foundry_install.py b/common/foundry_install.py
new file mode 100755
index 0000000..2067663
--- /dev/null
+++ b/common/foundry_install.py
@@ -0,0 +1,2376 @@
+#!/usr/bin/env python3
+#
+# foundry_install.py
+#
+# This file generates the local directory structure and populates the
+# directories with foundry vendor data.  The local directory (target)
+# should be a staging area, not a place where files are kept permanently.
+#
+# Options:
+#    -ef_format		Use efabless naming (libs.ref/techLEF),
+#			otherwise use generic naming (libs.tech/lef)
+#    -clean		Clear out and remove target directory before starting
+#    -source <path>	Path to source data top level directory
+#    -target <path>	Path to target (staging) top level directory
+#
+# All other options represent paths to vendor files.  They may all be
+# wildcarded with "*", or with specific escapes like "%l" for library
+# name or "%v" for version number (see below for a complete list of escape
+# sequences).
+#
+# Note that only one of "-spice" or "-cdl" needs to be specified.  Since the
+# open source tools use ngspice, CDL files are converted to ngspice
+# syntax when needed.
+#
+#	-techlef <path>	Path to technology LEF file
+#	-doc <path>	Path to technology documentation
+#	-lef <path>	Path to LEF file
+#	-spice <path>	Path to SPICE netlists
+#	-cdl <path>	Path to CDL netlists
+#	-models <path>	Path to SPICE (primitive device) models
+#	-liberty <path>	Path to Liberty timing files
+#	-gds <path>	Path to GDS data
+#	-verilog <path>	Path to verilog models
+#
+#	-library <type> <name> [<target>]	See below
+#
+# For the "-library" option, any number of libraries may be supported, and
+# one "-library" option should be provided for each supported library.
+# <type> is one of:  "digital", "primitive", or "general".  Analog and I/O
+# libraries fall under the category "general", as they are all treated the
+# same way.  <name> is the vendor name of the library.  [<target>] is the
+# (optional) local name of the library.  If omitted, then the vendor name
+# is used for the target (there is no particular reason to specify a
+# different local name for a library).
+#
+# In special cases using options (see below), path may be "-", indicating
+# that there are no source files, but only to run compilations or conversions
+# on the files in the target directory.
+#
+# All options "-lef", "-spice", etc., can take the additional arguments
+# 	up  <number>
+#
+# to indicate that the source hierarchy should be copied from <number>
+# levels above the files.  For example, if liberty files are kept in
+# multiple directories according to voltage level, then
+#
+# 	-liberty x/y/z/PVT_*/*.lib
+#
+# would install all .lib files directly into libs.ref/<libname>/liberty/*.lib
+# (if "-ef_format" option specified, then: libs.ref/liberty/<libname>/*.lib)
+# while
+#
+# 	-liberty x/y/z/PVT_*/*.lib up 1
+#
+# would install all .lib files into libs.ref/<libname>/liberty/PVT_*/*.lib
+# (if "-ef_format" option specified, then: libs.ref/liberty/<libname>/PVT_*/*.lib)
+#
+# Please note that the INSTALL variable in the Makefile starts with "set -f"
+# to suppress shell wildcard expansion;  otherwise, the wildcards in the
+# install options would be expanded by the shell before being passed to
+# the install script.
+#
+# Other library-specific arguments are:
+#
+#	nospec	:  Remove timing specification before installing
+#		    (used with verilog files;  needs to be extended to
+#		    liberty files)
+#	compile :  Create a single library from all components.  Used
+#		    when a foundry library has inconveniently split
+#		    an IP library (LEF, CDL, verilog, etc.) into
+#		    individual files.
+#	stub :	   Remove contents of subcircuits from CDL or SPICE
+#		    netlist files.
+#
+#	priv :	   Mark the contents being installed as privileged, and
+#		    put them in a separate root directory libs.priv
+#		    where they can be given additional read/write
+#		    restrictions.
+#
+#	exclude :  Followed by "=" and a comma-separated list of names.
+#		    Exclude these files/modules/subcircuits.  Names may
+#		    also be wildcarded in "glob" format.
+#
+#	rename :   Followed by "=" and an alternative name.  For any
+#		    file that is a single entry, change the name of
+#		    the file in the target directory to this (To-do:
+#		    take regexps for multiple files).  When used with
+#		    "compile" or "compile-only", this refers to the
+# 		    name of the target compiled file.
+#
+#	noconvert : Install only; do not attempt to convert to other
+#		    formats (applies only to GDS, CDL, and LEF).
+#
+# NOTE:  This script can be called once for all libraries if all file
+# types (gds, cdl, lef, etc.) happen to all work with the same wildcards.
+# However, it is more likely that it will be called several times for the
+# same PDK, once to install I/O cells, once to install digital, and so
+# forth, as made possible by the wild-carding.
+
+import re
+import os
+import sys
+import glob
+import stat
+import shutil
+import fnmatch
+import subprocess
+
+def usage():
+    print("foundry_install.py [options...]")
+    print("   -copy             Copy files from source to target (default)")
+    print("   -ef_format        Use efabless naming conventions for local directories")
+    print("")
+    print("   -source <path>    Path to top of source directory tree")
+    print("   -target <path>    Path to top of target directory tree")
+    print("")
+    print("   -techlef <path>   Path to technology LEF file")
+    print("   -doc <path>       Path to technology documentation")
+    print("   -lef <path>       Path to LEF file")
+    print("   -spice <path>     Path to SPICE netlists")
+    print("   -cdl <path>       Path to CDL netlists")
+    print("   -models <path>    Path to SPICE (primitive device) models")
+    print("   -lib <path>       Path to Liberty timing files")
+    print("   -liberty <path>       Path to Liberty timing files")
+    print("   -gds <path>       Path to GDS data")
+    print("   -verilog <path>   Path to verilog models")
+    print("   -library <type> <name> [<target>]	 See below")
+    print("")
+    print(" All <path> names may be wild-carded with '*' ('glob'-style wild-cards)")
+    print("")
+    print(" All options with <path> other than source and target may take the additional")
+    print(" arguments 'up <number>', where <number> indicates the number of levels of")
+    print(" hierarchy of the source path to include when copying to the target.")
+    print("")
+    print(" Library <type> may be one of:")
+    print("    digital		Digital standard cell library")
+    print("    primitive	Primitive device library")
+    print("    general		All other library types (I/O, analog, etc.)")
+    print("")
+    print(" If <target> is unspecified then <name> is used for the target.")
+
+# Substitute escape sequences in a pathname prior to glob-style expansion.
+# Expansion itself relies on glob.glob();  the following additional escape
+# strings are supported:
+#
+#   %v :  Match a version number in the form "major[.minor[.rev]]"
+#   %l :  substitute the library name
+#   %% :  substitute the percent character verbatim
+
+from distutils.version import LooseVersion
+
+#----------------------------------------------------------------------------
+#----------------------------------------------------------------------------
+
+def makeuserwritable(filepath):
+    if os.path.exists(filepath):
+        st = os.stat(filepath)
+        os.chmod(filepath, st.st_mode | stat.S_IWUSR)
+
+#----------------------------------------------------------------------------
+#----------------------------------------------------------------------------
+
+def substitute(pathname, library):
+    if library:
+        # Do %l substitution
+        newpathname = re.sub('%l', library, pathname)
+    else:
+        newpathname = pathname
+
+    if '%v' in newpathname:
+        vglob = re.sub('%v.*', '*', newpathname)
+        vlibs = glob.glob(vglob)
+        try:
+            vstr = vlibs[0][len(vglob)-1:]
+        except IndexError:
+            pass
+        else:
+            for vlib in vlibs[1:]:
+                vtest = vlib[len(vglob)-1:]
+                if LooseVersion(vtest) > LooseVersion(vstr):
+                    vstr = vtest
+            newpathname = re.sub('%v', vstr, newpathname)
+
+    if '%%' in newpathname:
+        newpathname = re.sub('%%', '%', newpathname)
+
+    return newpathname
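The "%v" escape above resolves to the highest version among the matching directories. A minimal sketch of that comparison, assuming purely numeric "major[.minor[.rev]]" strings (the script itself uses distutils `LooseVersion`, which also tolerates non-numeric components):

```python
def best_version(versions):
    """Return the highest version string by numeric component comparison.
    Sketch only: assumes every component is an integer."""
    def key(v):
        return tuple(int(p) for p in v.split('.'))
    return max(versions, key=key)

# Plain string comparison would rank '1.10' below '1.2';
# component-wise comparison ranks it correctly.
```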
+
+#----------------------------------------------------------------------------
+#----------------------------------------------------------------------------
+
+def get_gds_properties(magfile):
+    proprex = re.compile('^[ \t]*string[ \t]+(GDS_[^ \t]+)[ \t]+([^ \t]+)$')
+    proplines = []
+    if os.path.isfile(magfile):
+        with open(magfile, 'r') as ifile:
+            magtext = ifile.read().splitlines()
+            for line in magtext:
+                lmatch = proprex.match(line)
+                if lmatch:
+                    propline = lmatch.group(1) + ' ' + lmatch.group(2)
+                    proplines.append(propline)
+    return proplines
+
+#----------------------------------------------------------------------------
+# Read subcircuit ports from a CDL file, given a subcircuit name that should
+# appear in the file as a subcircuit entry, and return a dictionary of ports
+# and their indexes in the subcircuit line.
+#----------------------------------------------------------------------------
+
+def get_subckt_ports(cdlfile, subname):
+    portdict = {}
+    pidx = 1
+    portrex = re.compile('^\.subckt[ \t]+([^ \t]+)[ \t]+(.*)$', flags=re.IGNORECASE)
+    with open(cdlfile, 'r') as ifile:
+        cdltext = ifile.read()
+        cdllines = cdltext.replace('\n+', ' ').splitlines()
+        for line in cdllines:
+            lmatch = portrex.match(line)
+            if lmatch:
+                if lmatch.group(1).lower() == subname.lower():
+                    ports = lmatch.group(2).split()
+                    for port in ports:
+                        portdict[port.lower()] = pidx
+                        pidx += 1
+                    break
+    return portdict
+
+#----------------------------------------------------------------------------
+# Filter a verilog file to remove any backslash continuation lines, which
+# iverilog does not parse.  If targetroot is a directory, then find and
+# process all files in the path of targetroot.  If any file to be processed
+# is unmodified (has no backslash continuation lines), then ignore it.  If
+# any file is a symbolic link and gets modified, then remove the symbolic
+# link before overwriting with the modified file.
+#----------------------------------------------------------------------------
+
+def vfilefilter(vfile):
+    modified = False
+    with open(vfile, 'r') as ifile:
+        vtext = ifile.read()
+
+    # Remove backslash-followed-by-newline and absorb initial whitespace.  It
+    # is unclear what initial whitespace means in this context, as the use-
+    # case that has been seen seems to work under the assumption that leading
+    # whitespace is ignored up to the amount used by the last indentation.
+
+    vlines = re.sub('\\\\\n[ \t]*', '', vtext)
+
+    if vlines != vtext:
+        # File contents have been modified, so if this file was a symbolic
+        # link, then remove it.  Otherwise, overwrite the file with the
+        # modified contents.
+        if os.path.islink(vfile):
+            os.unlink(vfile)
+        with open(vfile, 'w') as ofile:
+            ofile.write(vlines)
+
+#----------------------------------------------------------------------------
+# Run a filter on verilog files that cleans up known syntax issues.
+# This is embedded in the foundry_install script and is not a custom
+# filter largely because the issue is in the tool, not the PDK.
+#----------------------------------------------------------------------------
+
+def vfilter(targetroot):
+    if os.path.isfile(targetroot):
+        vfilefilter(targetroot)
+    else:
+        vlist = glob.glob(targetroot + '/*')
+        for vfile in vlist:
+            if os.path.isfile(vfile):
+                vfilefilter(vfile)
+
+#----------------------------------------------------------------------------
+# For issues that are PDK-specific, a script can be written and put in
+# the PDK's custom/scripts/ directory, and passed to the foundry_install
+# script using the "filter" option.
+#----------------------------------------------------------------------------
+
+def tfilter(targetroot, filterscript, outfile=None):
+    filterroot = os.path.split(filterscript)[1]
+    if os.path.isfile(targetroot):
+        print('   Filtering file ' + targetroot + ' with ' + filterroot)
+        sys.stdout.flush()
+        if not outfile:
+            outfile = targetroot
+        else:
+            # Make sure this file is writable (as the original may not be)
+            makeuserwritable(outfile)
+
+        fproc = subprocess.run([filterscript, targetroot, outfile],
+			stdin = subprocess.DEVNULL, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, universal_newlines = True)
+        if fproc.stdout:
+            for line in fproc.stdout.splitlines():
+                print(line)
+        if fproc.stderr:
+            print('Error message output from filter script:')
+            for line in fproc.stderr.splitlines():
+                print(line)
+
+    else:
+        tlist = glob.glob(targetroot + '/*')
+        for tfile in tlist:
+            if os.path.isfile(tfile):
+                print('   Filtering file ' + tfile + ' with ' + filterroot)
+                sys.stdout.flush()
+                fproc = subprocess.run([filterscript, tfile, tfile],
+			stdin = subprocess.DEVNULL, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, universal_newlines = True)
+                if fproc.stdout:
+                    for line in fproc.stdout.splitlines():
+                        print(line)
+                if fproc.stderr:
+                    print('Error message output from filter script:')
+                    for line in fproc.stderr.splitlines():
+                        print(line)
+
+#----------------------------------------------------------------------------
+# Given a destination directory holding individual verilog files of a number
+# of modules, create a single verilog library file named <alllibname> and place
+# it in the same directory.  This is done for the option "compile" if specified
+# for the "-verilog" install.
+#----------------------------------------------------------------------------
+
+def create_verilog_library(destlibdir, destlib, do_compile_only, do_stub, excludelist):
+
+    alllibname = destlibdir + '/' + destlib + '.v'
+    if os.path.isfile(alllibname):
+        os.remove(alllibname)
+
+    print('Diagnostic:  Creating consolidated verilog library ' + destlib + '.v')
+    vlist = glob.glob(destlibdir + '/*.v')
+    if alllibname in vlist:
+        vlist.remove(alllibname)
+
+    # Create exclude list with glob-style matching using fnmatch
+    if len(vlist) > 0:
+        vlistnames = list(os.path.split(item)[1] for item in vlist)
+        notvlist = []
+        for exclude in excludelist:
+            notvlist.extend(fnmatch.filter(vlistnames, exclude))
+
+        # Apply exclude list
+        if len(notvlist) > 0:
+            for file in vlist[:]:
+                if os.path.split(file)[1] in notvlist:
+                    vlist.remove(file)
+
+    if len(vlist) > 1:
+        print('New file is:  ' + alllibname)
+        with open(alllibname, 'w') as ofile:
+            allmodules = []
+            for vfile in vlist:
+                with open(vfile, 'r') as ifile:
+                    # print('Adding ' + vfile + ' to library.')
+                    vtext = ifile.read()
+                    modules = re.findall(r'[ \t\n]module[ \t]+([^ \t\n\(]+)', vtext)
+                    mseen = list(item for item in modules if item in allmodules)
+                    allmodules.extend(list(item for item in modules if item not in allmodules))
+                    vfilter = remove_redundant_modules(vtext, allmodules, mseen)
+                    # NOTE:  The following workaround resolves an issue with iverilog,
+                    # which does not properly parse specify timing paths that are not in
+                    # parentheses, by wrapping the timing triple in parentheses.
+                    vlines = re.sub(r'\)[ \t]*=[ \t]*([01]:[01]:[01])[ \t]*;', r') = ( \1 ) ;', vfilter)
+                    print(vlines, file=ofile)
+                print('\n//--------EOF---------\n', file=ofile)
+
+        if do_compile_only == True:
+            print('Compile-only:  Removing individual verilog files')
+            for vfile in vlist:
+                if os.path.isfile(vfile):
+                    os.remove(vfile)
+                elif os.path.islink(vfile):
+                    os.unlink(vfile)
+    else:
+        print('Only one file (' + str(vlist) + ');  ignoring "compile" option.')
+
+#----------------------------------------------------------------------------
+# Remove redundant module entries from a verilog file.  "m2list" is a list
+# of module names gleaned from all previously read files using re.findall().
+# "mlist" is a list of all module names, including those in "ntext".
+# The reason for doing this is that some verilog files may include modules
+# used by all the files;  if a module is included more than once, iverilog
+# complains.
+#----------------------------------------------------------------------------
+
+def remove_redundant_modules(ntext, mlist, m2list):
+    updated = ntext
+    for module in mlist:
+        # Determine the number of times the module appears in the text
+        if module in m2list:
+            # This module was seen before outside of ntext, so remove all
+            # occurrences in ntext
+            new = re.sub(r'[ \t\n]+module[ \t]+' + re.escape(module) + r'[ \t\n\(]+.*?[ \t\n]endmodule', '\n', updated, flags=re.DOTALL)
+            updated = new
+
+        else:
+            n = len(re.findall(r'[ \t\n]module[ \t]+' + re.escape(module) + r'[ \t\n\(]+.*?[ \t\n]endmodule', updated, flags=re.DOTALL))
+            # This module may be defined more than once inside ntext, so
+            # remove all but one copy.
+            # Optimization:  Just keep original text if n < 2
+            if n < 2:
+                continue
+
+            # Remove all but one
+            updated = re.sub(r'[ \t\n]+module[ \t]+' + re.escape(module) + r'[ \t\n\(]+.*?[ \t\n]endmodule', '\n', updated, count=n - 1, flags=re.DOTALL)
+    return updated
+
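As a standalone illustration (a hypothetical snippet, not part of the installer), the module-stripping regex approach used by remove_redundant_modules() can be exercised on a toy netlist; the module names here are made up, and the non-greedy `.*?` keeps each match from running past the first `endmodule`:

```python
import re

# Toy verilog text (made-up module names) to exercise the same kind of
# pattern that remove_redundant_modules() applies.
vtext = '''
module buf_1 (a, x);
endmodule

module inv_1 (a, y);
endmodule
'''

# Strip every definition of buf_1, as if it had been seen in an earlier file.
# The non-greedy .*? stops the match at the first endmodule encountered.
filtered = re.sub(r'[ \t\n]+module[ \t]+' + re.escape('buf_1')
                  + r'[ \t\n\(]+.*?[ \t\n]endmodule',
                  '\n', vtext, flags=re.DOTALL)
```

After the substitution, only the inv_1 definition survives in `filtered`.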
+#----------------------------------------------------------------------------
+# Given a destination directory holding individual LEF files of a number
+# of cells, create a single LEF library file named <alllibname> and place
+# it in the same directory.  This is done for the option "compile" if specified
+# for the "-lef" install.
+#----------------------------------------------------------------------------
+
+def create_lef_library(destlibdir, destlib, do_compile_only, excludelist):
+
+    alllibname = destlibdir + '/' + destlib + '.lef'
+    if os.path.isfile(alllibname):
+        os.remove(alllibname)
+
+    print('Diagnostic:  Creating consolidated LEF library ' + destlib + '.lef')
+    llist = glob.glob(destlibdir + '/*.lef')
+    if alllibname in llist:
+        llist.remove(alllibname)
+
+    # Create exclude list with glob-style matching using fnmatch
+    if len(llist) > 0:
+        llistnames = list(os.path.split(item)[1] for item in llist)
+        notllist = []
+        for exclude in excludelist:
+            notllist.extend(fnmatch.filter(llistnames, exclude))
+
+        # Apply exclude list
+        if len(notllist) > 0:
+            for file in llist[:]:
+                if os.path.split(file)[1] in notllist:
+                    llist.remove(file)
+
+    if len(llist) > 1:
+        print('New file is:  ' + alllibname)
+        with open(alllibname, 'w') as ofile:
+            headerdone = False
+            for lfile in llist:
+                with open(lfile, 'r') as ifile:
+                    # print('Adding ' + lfile + ' to library.')
+                    ltext = ifile.read()
+                    llines = ltext.splitlines()
+                    headerseen = False
+                    for lline in llines:
+                        if headerdone:
+                            if not headerseen:
+                                if not lline.startswith('MACRO'):
+                                    continue
+                                else:
+                                    headerseen = True
+                        print(lline, file=ofile)
+                    headerdone = True
+                print('#--------EOF---------\n', file=ofile)
+
+        if do_compile_only:
+            print('Compile-only:  Removing individual LEF files')
+            for lfile in llist:
+                if os.path.isfile(lfile):
+                    os.remove(lfile)
+            # "newname" (set by the 'rename' option) is a global defined by
+            # the main script;  guard against it being undefined here.
+            if globals().get('newname'):
+                if os.path.isfile(newname):
+                    os.remove(newname)
+    else:
+        print('Only one file (' + str(llist) + ');  ignoring "compile" option.')
+
+#----------------------------------------------------------------------------
+# Given a destination directory holding individual liberty files of a number
+# of cells, create a single liberty library file named <alllibname> and place
+# it in the same directory.  This is done for the option "compile" if specified
+# for the "-lib" install.
+#----------------------------------------------------------------------------
+
+# Warning:  This script is unfinished.  Needs to parse the library header
+# in each cell and generate a new library header combining the contents of
+# all cell headers.  Also:  The library name in the header needs to be
+# changed to the full library name.  Also:  There is no mechanism for
+# collecting all files belonging to a single process corner/temperature/
+# voltage.
+
+def create_lib_library(destlibdir, destlib, do_compile_only, excludelist):
+
+    alllibname = destlibdir + '/' + destlib + '.lib'
+    if os.path.isfile(alllibname):
+        os.remove(alllibname)
+
+    print('Diagnostic:  Creating consolidated liberty library ' + destlib + '.lib')
+    llist = glob.glob(destlibdir + '/*.lib')
+    if alllibname in llist:
+        llist.remove(alllibname)
+
+    # Create exclude list with glob-style matching using fnmatch
+    if len(llist) > 0:
+        llistnames = list(os.path.split(item)[1] for item in llist)
+        notllist = []
+        for exclude in excludelist:
+            notllist.extend(fnmatch.filter(llistnames, exclude))
+
+        # Apply exclude list
+        if len(notllist) > 0:
+            for file in llist[:]:
+                if os.path.split(file)[1] in notllist:
+                    llist.remove(file)
+
+    if len(llist) > 1:
+        print('New file is:  ' + alllibname)
+        with open(alllibname, 'w') as ofile:
+            headerdone = False
+            for lfile in llist:
+                with open(lfile, 'r') as ifile:
+                    # print('Adding ' + lfile + ' to library.')
+                    ltext = ifile.read()
+                    llines = ltext.splitlines()
+                    headerseen = False
+                    for lline in llines:
+                        if headerdone:
+                            if not headerseen:
+                                ltok = lline.split()
+                                if not ltok or ltok[0] != 'cell':
+                                    continue
+                                else:
+                                    headerseen = True
+                        print(lline, file=ofile)
+                    headerdone = True
+                print('/*--------EOF---------*/\n', file=ofile)
+
+        if do_compile_only:
+            print('Compile-only:  Removing individual liberty files')
+            for lfile in llist:
+                if os.path.isfile(lfile):
+                    os.remove(lfile)
+            # "newname" (set by the 'rename' option) is a global defined by
+            # the main script;  guard against it being undefined here.
+            if globals().get('newname'):
+                if os.path.isfile(newname):
+                    os.remove(newname)
+    else:
+        print('Only one file (' + str(llist) + ');  ignoring "compile" option.')
+
+#----------------------------------------------------------------------------
+# Given a destination directory holding individual GDS files of a number
+# of cells, create a single GDS library file named <alllibname> and place
+# it in the same directory.  This is done for the option "compile" if specified
+# for the "-gds" install.
+#----------------------------------------------------------------------------
+
+def create_gds_library(destlibdir, destlib, startup_script, do_compile_only, excludelist):
+
+    alllibname = destlibdir + '/' + destlib + '.gds'
+    if os.path.isfile(alllibname):
+        os.remove(alllibname)
+
+    print('Diagnostic:  Creating consolidated GDS library ' + destlib + '.gds')
+    glist = glob.glob(destlibdir + '/*.gds')
+    glist.extend(glob.glob(destlibdir + '/*.gdsii'))
+    glist.extend(glob.glob(destlibdir + '/*.gds2'))
+    if alllibname in glist:
+        glist.remove(alllibname)
+
+    # Create exclude list with glob-style matching using fnmatch
+    if len(glist) > 0:
+        glistnames = list(os.path.split(item)[1] for item in glist)
+        notglist = []
+        for exclude in excludelist:
+            notglist.extend(fnmatch.filter(glistnames, exclude))
+
+        # Apply exclude list
+        if len(notglist) > 0:
+            for file in glist[:]:
+                if os.path.split(file)[1] in notglist:
+                    glist.remove(file)
+
+    if len(glist) > 1:
+        print('New file is:  ' + alllibname)
+
+        if os.path.isfile(startup_script):
+            # If the symbolic link exists, remove it.
+            if os.path.isfile(destlibdir + '/.magicrc'):
+                os.remove(destlibdir + '/.magicrc')
+            os.symlink(startup_script, destlibdir + '/.magicrc')
+
+        # A GDS library is binary and requires handling in Magic
+        print('Creating magic generation script to generate GDS library.') 
+        with open(destlibdir + '/generate_magic.tcl', 'w') as ofile:
+            print('#!/usr/bin/env wish', file=ofile)
+            print('#--------------------------------------------', file=ofile)
+            print('# Script to generate .gds library from files   ', file=ofile)
+            print('#--------------------------------------------', file=ofile)
+            print('drc off', file=ofile)
+            print('gds readonly true', file=ofile)
+            print('gds flatten true', file=ofile)
+            print('gds rescale false', file=ofile)
+            print('tech unlock *', file=ofile)
+
+            for gdsfile in glist:
+                print('gds read ' + gdsfile, file=ofile)
+
+            print('puts stdout "Creating cell ' + destlib + '"', file=ofile)
+            print('load ' + destlib, file=ofile)
+            print('puts stdout "Adding cells to library"', file=ofile)
+            print('box values 0 0 0 0', file=ofile)
+            for gdsfile in glist:
+                gdsroot = os.path.split(gdsfile)[1]
+                gdsname = os.path.splitext(gdsroot)[0]
+                print('getcell ' + gdsname, file=ofile)
+                # Could properly make space for the cell here. . . 
+                print('box move e 200', file=ofile)
+                                
+            print('puts stdout "Writing GDS file ' + destlib + '"', file=ofile)
+            print('gds write ' + destlib, file=ofile)
+            print('puts stdout "Done."', file=ofile)
+            print('quit -noprompt', file=ofile)
+
+        # Run magic to read in the individual GDS files and
+        # write out the consolidated GDS library
+
+        print('Running magic to create GDS library.')
+        sys.stdout.flush()
+
+        mproc = subprocess.run(['magic', '-dnull', '-noconsole',
+			destlibdir + '/generate_magic.tcl'],
+			stdin = subprocess.DEVNULL,
+			stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, cwd = destlibdir,
+			universal_newlines = True)
+
+        if mproc.stdout:
+            for line in mproc.stdout.splitlines():
+                print(line)
+        if mproc.stderr:
+            print('Error message output from magic:')
+            for line in mproc.stderr.splitlines():
+                print(line)
+        if mproc.returncode != 0:
+            print('ERROR:  Magic exited with status ' + str(mproc.returncode))
+        if do_compile_only:
+            print('Compile-only:  Removing individual GDS files')
+            for gfile in glist:
+                if os.path.isfile(gfile):
+                    os.remove(gfile)
+            # "newname" (set by the 'rename' option) is a global defined by
+            # the main script;  guard against it being undefined here.
+            if globals().get('newname'):
+                if os.path.isfile(newname):
+                    os.remove(newname)
+    else:
+        print('Only one file (' + str(glist) + ');  ignoring "compile" option.')
+
+#----------------------------------------------------------------------------
+# Given a destination directory holding individual SPICE netlists of a number
+# of cells, create a single SPICE library file named <alllibname> and place
+# it in the same directory.  This is done for the option "compile" if specified
+# for the "-spice" install.
+#----------------------------------------------------------------------------
+
+def create_spice_library(destlibdir, destlib, spiext, do_compile_only, do_stub, excludelist):
+
+    fformat = 'CDL' if spiext == '.cdl' else 'SPICE'
+
+    allstubname = destlibdir + '/stub' + spiext
+    alllibname = destlibdir + '/' + destlib + spiext
+    if do_stub:
+        outputname = allstubname
+    else:
+        outputname = alllibname
+
+    print('Diagnostic:  Creating consolidated ' + fformat + ' library ' + outputname)
+
+    if os.path.isfile(outputname):
+        os.remove(outputname)
+
+    if fformat == 'CDL':
+        slist = glob.glob(destlibdir + '/*.cdl')
+    else:
+        # Sadly, there is no consensus on what a SPICE file extension should be.
+        slist = glob.glob(destlibdir + '/*.spc')
+        slist.extend(glob.glob(destlibdir + '/*.spice'))
+        slist.extend(glob.glob(destlibdir + '/*.spi'))
+        slist.extend(glob.glob(destlibdir + '/*.ckt'))
+
+    if alllibname in slist:
+        slist.remove(alllibname)
+
+    if allstubname in slist:
+        slist.remove(allstubname)
+
+    # Create exclude list with glob-style matching using fnmatch
+    if len(slist) > 0:
+        slistnames = list(os.path.split(item)[1] for item in slist)
+        notslist = []
+        for exclude in excludelist:
+            notslist.extend(fnmatch.filter(slistnames, exclude))
+
+        # Apply exclude list
+        if len(notslist) > 0:
+            for file in slist[:]:
+                if os.path.split(file)[1] in notslist:
+                    slist.remove(file)
+
+    if len(slist) > 1:
+        with open(outputname, 'w') as ofile:
+            allsubckts = []
+            for sfile in slist:
+                with open(sfile, 'r') as ifile:
+                    # print('Adding ' + sfile + ' to library.')
+                    stext = ifile.read()
+                    subckts = re.findall(r'\.subckt[ \t]+([^ \t\n]+)', stext, flags=re.IGNORECASE)
+                    sseen = list(item for item in subckts if item in allsubckts)
+                    allsubckts.extend(list(item for item in subckts if item not in allsubckts))
+                    sfilter = remove_redundant_subckts(stext, allsubckts, sseen)
+                    print(sfilter, file=ofile)
+                print('\n******* EOF\n', file=ofile)
+
+        if do_compile_only:
+            print('Compile-only:  Removing individual SPICE files')
+            for sfile in slist:
+                if os.path.isfile(sfile):
+                    os.remove(sfile)
+                elif os.path.islink(sfile):
+                    os.unlink(sfile)
+    else:
+        print('Only one file (' + str(slist) + ');  ignoring "compile" option.')
+
+#----------------------------------------------------------------------------
+# Remove redundant subcircuit entries from a SPICE or CDL netlist file.  "sseen"
+# is a list of subcircuit names gleaned from all previously read files using
+# re.findall(). "slist" is a list of subcircuits including those in "ntext".
+# If a subcircuit is defined outside of "ntext", then remove all occurrences in
+# "ntext".  Otherwise, if a subcircuit is defined more than once in "ntext",
+# remove all but one copy.  The reason for doing this is that some netlists will
+# include primitive device definitions used by all the standard cell subcircuits.
+#
+# It may be necessary to remove redundant .include statements and redundant .model
+# and/or .option statements as well.
+#----------------------------------------------------------------------------
+
+def remove_redundant_subckts(ntext, slist, sseen):
+    updated = ntext
+    for subckt in slist:
+        if subckt in sseen:
+            # Remove all occurrences of subckt
+            updated = re.sub(r'\n\.subckt[ \t]+' + re.escape(subckt) + r'[ \t\n]+.*?\n\.ends[ \t\n]+', '\n', updated, flags=re.IGNORECASE | re.DOTALL)
+
+        else:
+            # Determine the number of times the subcircuit appears in the text
+            n = len(re.findall(r'\n\.subckt[ \t]+' + re.escape(subckt) + r'[ \t\n]+.*?\n\.ends[ \t\n]+', updated, flags=re.IGNORECASE | re.DOTALL))
+            # Optimization:  Just keep original text if n < 2
+            if n < 2:
+                continue
+
+            # Remove all but one
+            updated = re.sub(r'\n\.subckt[ \t]+' + re.escape(subckt) + r'[ \t\n]+.*?\n\.ends[ \t\n]+', '\n', updated, count=n - 1, flags=re.IGNORECASE | re.DOTALL)
+    return updated
+
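The header comment above notes that redundant `.include` statements may also need removal. One possible approach is sketched below; this helper is illustrative only, is not called anywhere in this script, and the name remove_redundant_includes is invented:

```python
import re

def remove_redundant_includes(ntext, seen_includes):
    # Drop any .include line whose target file has already been recorded
    # in seen_includes (a set, updated in place across calls).
    outlines = []
    for line in ntext.splitlines():
        lmatch = re.match(r'\.include[ \t]+[\'"]?([^\'" \t]+)', line,
                          flags=re.IGNORECASE)
        if lmatch:
            incfile = lmatch.group(1)
            if incfile in seen_includes:
                continue
            seen_includes.add(incfile)
        outlines.append(line)
    return '\n'.join(outlines)
```

Called once per netlist with a shared set, the first netlist keeps its `.include` line and later netlists naming the same file have theirs dropped.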
+#----------------------------------------------------------------------------
+# This is the main entry point for the foundry install script.
+#----------------------------------------------------------------------------
+
+if __name__ == '__main__':
+
+    if len(sys.argv) == 1:
+        print("No options given to foundry_install.py.")
+        usage()
+        sys.exit(0)
+    
+    optionlist = []
+    newopt = []
+
+    sourcedir = None
+    targetdir = None
+
+    ef_format = False
+    do_clean = False
+
+    have_lef = False
+    have_techlef = False
+    have_lefanno = False
+    have_gds = False
+    have_spice = False
+    have_cdl = False
+    have_verilog = False
+    have_lib = False
+
+    # Break arguments into groups where the first word begins with "-".
+    # All following words not beginning with "-" are appended to the
+    # same list (optionlist).  Then each optionlist is processed.
+    # Note that the first entry in optionlist has the '-' removed.
+
+    for option in sys.argv[1:]:
+        if option.startswith('-'):
+            if newopt != []:
+                optionlist.append(newopt)
+                newopt = []
+            newopt.append(option[1:])
+        else:
+            newopt.append(option)
+
+    if newopt != []:
+        optionlist.append(newopt)
+
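The grouping rule described in the comment above can be shown with a self-contained sketch (the helper name and the argument values are illustrative only):

```python
def group_options(args):
    # Split a flat argument list into groups:  each group starts at a word
    # beginning with '-' (the '-' is stripped from the stored word), and
    # collects all following words up to the next '-' word.
    optionlist = []
    newopt = []
    for option in args:
        if option.startswith('-'):
            if newopt:
                optionlist.append(newopt)
                newopt = []
            newopt.append(option[1:])
        else:
            newopt.append(option)
    if newopt:
        optionlist.append(newopt)
    return optionlist
```

For example, `['-source', '/tmp/pdk', '-lef', 'lef/*.lef', 'compile']` groups into `[['source', '/tmp/pdk'], ['lef', 'lef/*.lef', 'compile']]`.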
+    # Pull library names from optionlist
+    libraries = []
+    for option in optionlist[:]:
+        if option[0] == 'library':
+            optionlist.remove(option)
+            libraries.append(option[1:]) 
+
+    # Check for option "ef_format" or "std_format" or "clean"
+    for option in optionlist[:]:
+        if option[0] == 'ef_naming' or option[0] == 'ef_names' or option[0] == 'ef_format':
+            optionlist.remove(option)
+            ef_format = True
+        elif option[0] == 'std_naming' or option[0] == 'std_names' or option[0] == 'std_format':
+            optionlist.remove(option)
+            ef_format = False
+        elif option[0] == 'clean':
+            optionlist.remove(option)
+            do_clean = True
+
+    # Check for options "source" and "target"
+    for option in optionlist[:]:
+        if option[0] == 'source':
+            optionlist.remove(option)
+            sourcedir = option[1]
+        elif option[0] == 'target':
+            optionlist.remove(option)
+            targetdir = option[1]
+
+    if not targetdir:
+        print("No target directory specified.  Exiting.")
+        sys.exit(1)
+
+    # Take the target PDK name from the target path last component
+    pdkname = os.path.split(targetdir)[1]
+
+    # If targetdir (the staging area) exists, make sure it's empty.
+
+    if os.path.isdir(targetdir):
+        # Error if targetdir exists but is not writeable
+        if not os.access(targetdir, os.W_OK):
+            print("Target installation directory " + targetdir + " is not writable.")
+            sys.exit(1)
+
+        # Clear out the staging directory if specified
+        if do_clean:
+            shutil.rmtree(targetdir)
+    elif os.path.exists(targetdir):
+        print("Target installation directory " + targetdir + " is not a directory.")
+        sys.exit(1)
+
+    # Error if no source or dest specified unless "-clean" was specified
+    if not sourcedir:
+        if do_clean:
+            print("Done removing staging area.")
+            sys.exit(0)
+        else:
+            print("No source directory specified.  Exiting.")
+            sys.exit(1)
+
+    # Create the target directory
+    os.makedirs(targetdir, exist_ok=True)
+
+    #----------------------------------------------------------------
+    # Installation part 1:  Install files into the staging directory
+    #----------------------------------------------------------------
+
+    # Diagnostic
+    print("Installing in target (staging) directory " + targetdir)
+
+    # Create the top-level directories
+
+    os.makedirs(targetdir + '/libs.tech', exist_ok=True)
+    os.makedirs(targetdir + '/libs.ref', exist_ok=True)
+
+    # Path to magic techfile depends on ef_format
+
+    if ef_format:
+        mag_current = '/libs.tech/magic/current/'
+    else:
+        mag_current = '/libs.tech/magic/'
+
+    # Check for magic version and set flag if it does not exist or if
+    # it has the wrong version.
+    have_mag_8_2 = False
+    try:
+        mproc = subprocess.run(['magic', '--version'],
+		stdout = subprocess.PIPE,
+		stderr = subprocess.PIPE,
+		universal_newlines = True)
+        if mproc.stdout:
+            mag_version = mproc.stdout.splitlines()[0]
+            mag_version_info = mag_version.split('.')
+            try:
+                if int(mag_version_info[0]) > 8:
+                    have_mag_8_2 = True
+                elif int(mag_version_info[0]) == 8:
+                    if int(mag_version_info[1]) >= 2:
+                        have_mag_8_2 = True
+                        print('Magic version 8.2 available on the system.')
+            except ValueError:
+                print('Error: "magic --version" did not return valid version number.')
+    except FileNotFoundError:
+        print('Error: Failed to find executable for magic in standard search path.')
+
+    if not have_mag_8_2:
+        print('WARNING:  Magic version 8.2 cannot be executed from the standard executable search path.')
+        print('Please install or correct the search path.')
+        print('Magic database files will not be created, and other missing file formats may not be generated.')
+
+    # Populate any targets that do not specify a library, or where the library is
+    # specified as "primitive".  
+
+    # Populate the techLEF and SPICE models, if specified.  Also, this section can add
+    # to any directory in libs.tech/ as given by the option;  e.g., "-ngspice" will
+    # install into libs.tech/ngspice/.
+
+    if libraries == [] or 'primitive' in libraries[0]:
+
+        for option in optionlist[:]:
+
+            # Legacy behavior is to put libs.tech models and techLEF files in
+            # the same grouping as files for the primdev library (which go in
+            # libs.ref).  Current behavior is to put all libs.tech files in
+            # a grouping with no library, with unrestricted ability to write
+            # into any subdirectory of libs.tech/.  Therefore, need to restrict
+            # legacy use to just 'techlef' and 'models'.
+
+            if len(libraries) > 0 and 'primitive' in libraries[0]:
+                if option[0] != 'techlef' and option[0] != 'techLEF' and option[0] != 'models':
+                    continue
+  
+            # Normally technology LEF files are associated with IP libraries.
+            # However, if no library is specified or the library is 'primitive'
+            # (legacy behavior), then put in the techLEF directory with no subdirectory.
+
+            filter_scripts = []
+            if option[0] == 'techlef' or option[0] == 'techLEF':
+                for item in option:
+                    if item.split('=')[0] == 'filter':
+                        filter_scripts.append(item.split('=')[1])
+                        break
+
+                if ef_format:
+                    techlefdir = targetdir + '/libs.ref/techLEF'
+                else:
+                    techlefdir = targetdir + '/libs.tech/lef'
+
+                os.makedirs(techlefdir, exist_ok=True)
+                # All techlef files should be copied, so use "glob" on the wildcards
+                techlist = glob.glob(substitute(sourcedir + '/' + option[1], None))
+
+                for lefname in techlist:
+                    leffile = os.path.split(lefname)[1]
+                    targname = techlefdir + '/' + leffile
+
+                    if os.path.isfile(lefname):
+                        shutil.copy(lefname, targname)
+                    else:
+                        shutil.copytree(lefname, targname)
+
+                    for filter_script in filter_scripts:
+                        # Apply filter script to all files in the target directory
+                        tfilter(targname, filter_script)
+
+                optionlist.remove(option)
+
+            # All remaining options will refer to specific tools (e.g., -ngspice,
+            # -magic), although generic names (e.g., -models) are acceptable if the
+            # tools know where to find the files.  Currently, most tools have their
+            # own formats and standards for setup, so generally each install
+            # directory will be unique to one EDA tool.
+
+            else:
+                filter_scripts = []
+                for item in option:
+                    if item.split('=')[0] == 'filter':
+                        filter_scripts.append(item.split('=')[1])
+                        break
+
+                print('Diagnostic:  installing ' + option[0] + '.')
+                tooldir = targetdir + '/libs.tech/' + option[0]
+                os.makedirs(tooldir, exist_ok=True)
+
+                # All files should be linked or copied, so use "glob" on
+                # the wildcards.  Copy each file and recursively copy each
+                # directory.
+                toollist = glob.glob(substitute(sourcedir + '/' + option[1], None))
+
+                for toolname in toollist:
+                    toolfile = os.path.split(toolname)[1]
+                    targname = tooldir + '/' + toolfile
+
+                    if os.path.isdir(toolname):
+                        # Remove any existing directory, and its contents
+                        if os.path.isdir(targname):
+                            shutil.rmtree(targname)
+                        os.makedirs(targname)
+    
+                        # Recursively find and copy or link the whole directory
+                        # tree from this point.
+
+                        alltoollist = glob.glob(toolname + '/**', recursive=True)
+                        commonpart = os.path.commonpath(alltoollist)
+                        for subtoolname in alltoollist:
+                            if os.path.isdir(subtoolname):
+                                continue
+                            # Get the path part that is not common between toollist and
+                            # alltoollist.
+                            subpart = os.path.relpath(subtoolname, commonpart)
+                            subtargname = targname + '/' + subpart
+                            os.makedirs(os.path.split(subtargname)[0], exist_ok=True)
+
+                            if os.path.isfile(subtoolname):
+                                shutil.copy(subtoolname, subtargname)
+                            else:
+                                shutil.copytree(subtoolname, subtargname)
+
+                            for filter_script in filter_scripts:
+                                # Apply filter script to all files in the target directory
+                                tfilter(subtargname, filter_script)
+
+                    else:
+                        # Remove any existing file
+                        if os.path.isfile(targname):
+                            os.remove(targname)
+                        elif os.path.isdir(targname):
+                            shutil.rmtree(targname)
+
+                        if os.path.isfile(toolname):
+                            shutil.copy(toolname, targname)
+                        else:
+                            shutil.copytree(toolname, targname)
+
+                        for filter_script in filter_scripts:
+                            # Apply filter script to all files in the target directory
+                            tfilter(targname, filter_script)
+
+                optionlist.remove(option)
+
+    # Do an initial pass through all of the options and determine what is being
+    # installed, so that we know in advance which file formats are missing and
+    # need to be generated.
+
+    for option in optionlist[:]:
+        if option[0] == 'lef':
+            have_lef = True
+        if option[0] == 'techlef' or option[0] == 'techLEF':
+            have_techlef = True
+        elif option[0] == 'gds':
+            have_gds = True
+        elif option[0] == 'spice' or option[0] == 'spi':
+            have_spice = True
+        elif option[0] == 'cdl':
+            have_cdl = True
+        elif option[0] == 'verilog':
+            have_verilog = True
+        elif option[0] == 'lib' or option[0] == 'liberty':
+            have_lib = True
+
+    # The remaining options in optionlist should all be types like 'lef' or 'liberty'
+    # and there should be a corresponding library list specified by '-library'
+
+    for option in optionlist[:]:
+
+        # Ignore if no library list---should have been taken care of above.
+        if libraries == []:
+            break
+
+        # Diagnostic
+        print("Install option: " + str(option[0]))
+
+        # For ef_format:  always make techlef -> techLEF and spice -> spi
+
+        if ef_format:
+            if option[0] == 'techlef':
+                option[0] = 'techLEF'
+            elif option[0] == 'spice':
+                option[0] = 'spi'
+
+            destdir = targetdir + '/libs.ref/' + option[0]
+            os.makedirs(destdir, exist_ok=True)
+
+        # If the option is followed by the keyword "up" and a number, then
+        # the source should be copied (or linked) from <number> levels up
+        # in the hierarchy (see below).
+
+        if 'up' in option:
+            uparg = option.index('up') 
+            try:
+                hier_up = int(option[uparg + 1])
+            except (IndexError, ValueError):
+                print("Missing or non-numeric argument to 'up';  ignoring 'up' option.")
+                hier_up = 0
+        else:
+            hier_up = 0
+
+        filter_scripts = []
+        for item in option:
+            if item.split('=')[0] == 'filter':
+                filter_scripts.append(item.split('=')[1])
+                break
+
+        # Option 'stub' applies to netlists ('cdl' or 'spice') and generates
+        # a file with only stub entries.
+        do_stub = 'stub' in option
+
+        # Option 'compile' is a standalone keyword ('comp' may be used).
+        do_compile = 'compile' in option or 'comp' in option
+        do_compile_only = 'compile-only' in option or 'comp-only' in option
+ 
+        # Option 'nospecify' is a standalone keyword ('nospec' may be used).
+        do_remove_spec = 'nospecify' in option or 'nospec' in option
+
+        # Option 'exclude' has an argument
+        try:
+            excludelist = list(item.split('=')[1].split(',') for item in option if item.startswith('excl'))[0]
+        except IndexError:
+            excludelist = []
+        else:
+            print('Excluding files: ' + (',').join(excludelist))
+
+        # Option 'rename' has an argument
+        try:
+            newname = list(item.split('=')[1] for item in option if item.startswith('rename'))[0]
+        except IndexError:
+            newname = None
+        else:
+            print('Renaming file to: ' + newname)
+
+        # 'anno' may be specified for LEF, in which case the LEF is used only
+        # to annotate GDS and is not itself installed;  this allows LEF to
+        # be generated from Magic and avoids quirky use of obstruction layers.
+        have_lefanno = 'annotate' in option or 'anno' in option
+        if have_lefanno: 
+            if option[0] != 'lef':
+                print("Warning: 'annotate' option specified outside of -lef.  Ignoring.")
+            else:
+                # Mark as NOT having LEF since we want to use it only for annotation.
+                have_lef = False
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+
+            if ef_format:
+                destlibdir = destdir + '/' + destlib
+            else:
+                destdir = targetdir + '/libs.ref/' + destlib + '/' + option[0]
+                destlibdir = destdir
+
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Populate the library subdirectory
+            # Parse the option and replace each '/*/' with the library name,
+            # and check if it is a valid directory name.  Then glob the
+            # resulting option name.  Warning:  This assumes that all
+            # occurrences of the text '/*/' match a library name.  It should
+            # be possible to wild-card the directory name in such a way that
+            # this is always true.
+
+            testpath = substitute(sourcedir + '/' + option[1], library[1])
+            liblist = glob.glob(testpath)
+
+            # Create a file "sources.txt" (or append to it if it exists)
+            # and add the source directory name so that the staging install
+            # script can know where the files came from.
+
+            with open(destlibdir + '/sources.txt', 'a') as ofile:
+                print(testpath, file=ofile)
+
+            # Create exclude list with glob-style matching using fnmatch
+            if len(liblist) > 0:
+                liblistnames = list(os.path.split(item)[1] for item in liblist)
+                notliblist = []
+                for exclude in excludelist:
+                    notliblist.extend(fnmatch.filter(liblistnames, exclude))
+
+                # Apply exclude list
+                if len(notliblist) > 0:
+                    for file in liblist[:]:
+                        if os.path.split(file)[1] in notliblist:
+                            liblist.remove(file)
+
+                if len(excludelist) > 0 and len(notliblist) == 0:
+                    print('Warning:  Nothing from the exclude list found in sources.')
+                    print('excludelist = ' + str(excludelist))
+                    print('destlibdir = ' + destlibdir)
+
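As a standalone sketch, the glob-style exclude filtering above can be expressed as a small helper (file names here are hypothetical, for illustration only):

```python
import fnmatch
import os

def apply_exclude(filelist, excludelist):
    """Drop any file whose basename matches a glob-style exclude
    pattern, mirroring the fnmatch.filter() usage in the installer."""
    names = [os.path.split(item)[1] for item in filelist]
    excluded = set()
    for pattern in excludelist:
        excluded.update(fnmatch.filter(names, pattern))
    return [item for item in filelist
            if os.path.split(item)[1] not in excluded]

# Hypothetical file names:
files = ['/src/lib/cell_a.lef', '/src/lib/cell_b.lef', '/src/lib/fill_x.lef']
print(apply_exclude(files, ['fill_*']))   # fill_x.lef is dropped
```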
+            # Diagnostic
+            print('Collecting files from ' + testpath)
+            print('Files to install:')
+            if len(liblist) < 10:
+                for item in liblist:
+                    print('   ' + item)
+            else:
+                for item in liblist[0:4]:
+                    print('   ' + item)
+                print('   .')
+                print('   .')
+                print('   .')
+                for item in liblist[-6:]:
+                    print('   ' + item)
+                print('(' + str(len(liblist)) + ' files total)')
+
+            for libname in liblist:
+                # Note that there may be a hierarchy to the files in option[1],
+                # say for liberty timing files under different conditions, so
+                # make sure directories have been created as needed.
+
+                libfile = os.path.split(libname)[1]
+                libfilepath = os.path.split(libname)[0]
+                destpathcomp = []
+                for i in range(hier_up):
+                    destpathcomp.append('/' + os.path.split(libfilepath)[1])
+                    libfilepath = os.path.split(libfilepath)[0]
+                destpathcomp.reverse()
+                destpath = ''.join(destpathcomp)
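The `destpathcomp` loop above rebuilds the last `hier_up` directory components of each source path; a minimal sketch (with a hypothetical path) of the same logic:

```python
import os

def trailing_components(dirpath, hier_up):
    """Rebuild the last hier_up directory components of a source
    directory path, as done for the 'up' option when preserving
    file hierarchy (e.g. liberty files under corner subdirectories)."""
    comps = []
    for _ in range(hier_up):
        dirpath, tail = os.path.split(dirpath)
        comps.append('/' + tail)
    comps.reverse()
    return ''.join(comps)

# A file two directory levels deep with up=2:
print(trailing_components('/vendor/timing/ff', 2))   # '/timing/ff'
```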
+
+                if newname:
+                    if len(liblist) == 1:
+                        destfile = newname
+                    else:
+                        if not do_compile and not do_compile_only:
+                            print('Error:  rename specified but more than one file found!')
+                        destfile = libfile
+                else:
+                    destfile = libfile
+
+                targname = destlibdir + destpath + '/' + destfile
+
+                # NOTE:  When using "up" with link_from, could just make
+                # destpath itself a symbolic link;  this way is more flexible
+                # but adds one symbolic link per file.
+
+                if destpath != '':
+                    if not os.path.isdir(destlibdir + destpath):
+                        os.makedirs(destlibdir + destpath, exist_ok=True)
+
+                # Remove any existing file
+                if os.path.isfile(targname):
+                    os.remove(targname)
+                elif os.path.isdir(targname):
+                    shutil.rmtree(targname)
+
+                # NOTE:  Diagnostic, probably much too much output.
+                print('   Install: ' + libname + ' to ' + targname)
+                if os.path.isfile(libname):
+                    shutil.copy(libname, targname)
+                else:
+                    shutil.copytree(libname, targname)
+
+                # File filtering options:  Two options 'stub' and 'nospec' are
+                # handled by scripts in ../common/.  Custom filters can also be
+                # specified.
+
+                local_filter_scripts = filter_scripts[:]
+
+                if option[0] == 'verilog':
+                    # Internally handle syntactical issues with verilog and iverilog
+                    vfilter(targname)
+
+                    if do_remove_spec:
+                        scriptdir = os.path.split(os.getcwd())[0] + '/common'
+                        local_filter_scripts.append(scriptdir + '/remove_specify.py')
+
+                elif option[0] == 'cdl' or option[0] == 'spi' or option[0] == 'spice':
+                    if do_stub:
+                        scriptdir = os.path.split(os.getcwd())[0] + '/common'
+                        local_filter_scripts.append(scriptdir + '/makestub.py')
+
+                for filter_script in local_filter_scripts:
+                    # Apply filter script to all files in the target directory
+                    tfilter(targname, filter_script)
+
+            if do_compile or do_compile_only:
+                # NOTE:  The purpose of "rename" is to put a destlib-named
+                # library elsewhere so that it can be merged with another
+                # library into a compiled <destlib>.<ext>
+
+                compname = destlib
+
+                # To do:  Make this compatible with linking from another PDK.
+
+                if option[0] == 'verilog':
+                    # If there is not a single file with all verilog cells in it,
+                    # then compile one, so that a design does not need an include
+                    # line for every single cell it uses.
+
+                    create_verilog_library(destlibdir, compname, do_compile_only, do_stub, excludelist)
+
+                elif option[0] == 'gds' and have_mag_8_2:
+                    # If there is not a single file with all GDS cells in it,
+                    # then compile one.
+
+                    # Link to the PDK magic startup file from the target directory
+                    startup_script = targetdir + mag_current + pdkname + '-F.magicrc'
+                    if not os.path.isfile(startup_script):
+                        startup_script = targetdir + mag_current + pdkname + '.magicrc'
+                    create_gds_library(destlibdir, compname, startup_script, do_compile_only, excludelist)
+
+                elif option[0] == 'liberty' or option[0] == 'lib':
+                    # If there is not a single file with all liberty cells in it,
+                    # then compile one, so that a design does not need an include
+                    # line for every single cell it uses.
+
+                    create_lib_library(destlibdir, compname, do_compile_only, excludelist)
+
+                elif option[0] == 'spice' or option[0] == 'spi':
+                    # If there is not a single file with all SPICE subcircuits in it,
+                    # then compile one, so that a design does not need an include
+                    # line for every single cell it uses.
+
+                    spiext = '.spice' if not ef_format else '.spi'
+                    create_spice_library(destlibdir, compname, spiext, do_compile_only, do_stub, excludelist)
+                    if do_compile_only:
+                        if newname:
+                            if os.path.isfile(newname):
+                                os.remove(newname)
+
+                elif option[0] == 'cdl':
+                    # If there is not a single file with all CDL subcircuits in it,
+                    # then compile one, so that a design does not need an include
+                    # line for every single cell it uses.
+
+                    create_spice_library(destlibdir, compname, '.cdl', do_compile_only, do_stub, excludelist)
+                    if do_compile_only:
+                        if newname:
+                            if os.path.isfile(newname):
+                                os.remove(newname)
+
+                elif option[0] == 'lef':
+                    # If there is not a single file with all LEF cells in it,
+                    # then compile one, so that a design does not need an include
+                    # line for every single cell it uses.
+
+                    create_lef_library(destlibdir, compname, do_compile_only, excludelist)
+
+        # Find any libraries/options marked as "privileged" (or "private") and
+        # move the files from libs.tech or libs.ref to libs.priv, leaving a
+        # symbolic link in the original location.  Do this during the initial
+        # install so that options following in the list can add files to the
+        # non-privileged equivalent directory path.
+
+        if 'priv' in option or 'privileged' in option or 'private' in option:
+
+            # Diagnostic
+            print("Install option: " + str(option[0]))
+
+            if ef_format:
+                os.makedirs(targetdir + '/libs.priv', exist_ok=True)
+
+            for library in libraries:
+                if len(library) == 3:
+                    destlib = library[2]
+                else:
+                    destlib = library[1]
+
+                if ef_format:
+                    srclibdir = targetdir + '/libs.ref/' + option[0] + '/' + destlib
+                    destlibdir = targetdir + '/libs.priv/' + option[0] + '/' + destlib
+                else:
+                    srclibdir = targetdir + '/libs.ref/' + destlib + '/' + option[0]
+                    destlibdir = targetdir + '/libs.priv/' + destlib + '/' + option[0]
+
+                if not os.path.exists(destlibdir):
+                    os.makedirs(destlibdir)
+
+                print('Moving files in ' + srclibdir + ' to privileged space.')
+                filelist = os.listdir(srclibdir)
+                for file in filelist:
+                    srcfile = srclibdir + '/' + file
+                    destfile = destlibdir + '/' + file
+                    if os.path.isfile(destfile):
+                        os.remove(destfile)
+                    elif os.path.isdir(destfile):
+                        shutil.rmtree(destfile)
+
+                    if os.path.isfile(srcfile):
+                        shutil.copy(srcfile, destfile)
+                        os.remove(srcfile)
+                    else:
+                        shutil.copytree(srcfile, destfile)
+                        shutil.rmtree(srcfile)
+
+    print("Completed installation of vendor files.")
+
+    #----------------------------------------------------------------
+    # Installation part 2:  Generate derived file formats
+    #----------------------------------------------------------------
+
+    # Now for the harder part.  If GDS and/or LEF databases were specified,
+    # then migrate them to magic (.mag files in layout/ or abstract/).
+
+    ignorelist = []
+    do_cdl_scaleu  = False
+    no_cdl_convert = False
+    no_gds_convert = False
+    no_lef_convert = False
+    cdl_compile_only = False
+
+    cdl_exclude = []
+    lef_exclude = []
+    gds_exclude = []
+    spice_exclude = []
+    verilog_exclude = []
+
+    cdl_reflib = '/libs.ref/'
+    gds_reflib = '/libs.ref/'
+    lef_reflib = '/libs.ref/'
+
+    for option in optionlist[:]:
+        if option[0] == 'cdl':
+            # Option 'scaleu' is a standalone keyword
+            do_cdl_scaleu = 'scaleu' in option
+
+            # Option 'ignore' has arguments after '='
+            for item in option:
+                if item.split('=')[0] == 'ignore':
+                    ignorelist = item.split('=')[1].split(',')
+
+        # Option 'noconvert' is a standalone keyword.
+        if 'noconvert' in option:
+            if option[0] == 'cdl':
+                no_cdl_convert = True
+            elif option[0] == 'gds':
+                no_gds_convert = True
+            elif option[0] == 'lef':
+                no_lef_convert = True
+
+        # Option 'privileged' is a standalone keyword.
+        if 'priv' in option or 'privileged' in option or 'private' in option:
+            if option[0] == 'cdl':
+                cdl_reflib = '/libs.priv/'
+            elif option[0] == 'gds':
+                gds_reflib = '/libs.priv/'
+            elif option[0] == 'lef':
+                lef_reflib = '/libs.priv/'
+
+        # If CDL is marked 'compile-only' then CDL should only convert the
+        # compiled file to SPICE if conversion is needed.
+        if 'compile-only' in option:
+            if option[0] == 'cdl':
+                cdl_compile_only = True
+
+        # Find exclude list for any option
+        for item in option:
+            if item.split('=')[0] == 'exclude':
+                exclude_list = item.split('=')[1].split(',')
+                if option[0] == 'cdl':
+                    cdl_exclude = exclude_list
+                elif option[0] == 'lef':
+                    lef_exclude = exclude_list
+                elif option[0] == 'gds':
+                    gds_exclude = exclude_list
+                elif option[0] == 'spi' or option[0] == 'spice':
+                    spice_exclude = exclude_list
+                elif option[0] == 'verilog':
+                    verilog_exclude = exclude_list
+
+    devlist = []
+    pdklibrary = None
+
+    if have_gds and not no_gds_convert:
+        print("Migrating GDS files to layout.")
+
+        if ef_format:
+            destdir = targetdir + gds_reflib + 'mag'
+            srcdir = targetdir + gds_reflib + 'gds'
+            vdir = targetdir + '/libs.ref/' + 'verilog'
+            cdir = targetdir + cdl_reflib + 'cdl'
+            sdir = targetdir + cdl_reflib + 'spi'
+
+            os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+
+            if ef_format:
+                destlibdir = destdir + '/' + destlib
+                srclibdir = srcdir + '/' + destlib
+                vlibdir = vdir + '/' + destlib
+                clibdir = cdir + '/' + destlib
+                slibdir = sdir + '/' + destlib
+            else:
+                destdir = targetdir + gds_reflib + destlib + '/mag'
+                srcdir = targetdir + gds_reflib + destlib + '/gds'
+                vdir = targetdir + '/libs.ref/' + destlib + '/verilog'
+                cdir = targetdir + cdl_reflib + destlib + '/cdl'
+                sdir = targetdir + cdl_reflib + destlib + '/spice'
+                destlibdir = destdir
+                srclibdir = srcdir
+                vlibdir = vdir
+                clibdir = cdir
+                slibdir = sdir
+
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # For primitive devices, check the PDK script and find the name
+            # of the library and get a list of supported devices.
+
+            if library[0] == 'primitive':
+                pdkscript = targetdir + mag_current + pdkname + '.tcl'
+                print('Searching for supported devices in PDK script ' + pdkscript + '.')
+
+                if os.path.isfile(pdkscript):
+                    librex = re.compile(r'^[ \t]*set[ \t]+PDKNAMESPACE[ \t]+([^ \t]+)$')
+                    devrex = re.compile(r'^[ \t]*proc[ \t]+([^ :\t]+)::([^ \t_]+)_defaults')
+                    fixrex = re.compile(r'^[ \t]*return[ \t]+\[([^ :\t]+)::fixed_draw[ \t]+([^ \t]+)[ \t]+')
+                    devlist = []
+                    fixedlist = []
+                    with open(pdkscript, 'r') as ifile:
+                        scripttext = ifile.read().splitlines()
+                        for line in scripttext:
+                            lmatch = librex.match(line)
+                            if lmatch:
+                                pdklibrary = lmatch.group(1)
+                            dmatch = devrex.match(line)
+                            if dmatch:
+                                if dmatch.group(1) == pdklibrary:
+                                    devlist.append(dmatch.group(2))
+                            fmatch = fixrex.match(line)
+                            if fmatch:
+                                if fmatch.group(1) == pdklibrary:
+                                    fixedlist.append(fmatch.group(2))
+
+                # Diagnostic
+                print("PDK library is " + str(pdklibrary))
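The regex scan of the PDK Tcl script above can be exercised in isolation; this sketch uses a hypothetical script fragment and the same two patterns (namespace and device lists shown here are invented for illustration):

```python
import re

# Hypothetical fragment of a <pdkname>.tcl PDK script:
scripttext = '''
set PDKNAMESPACE mypdk
proc mypdk::nmos_defaults {} {}
proc mypdk::pmos_defaults {} {}
proc otherlib::res_defaults {} {}
'''

librex = re.compile(r'^[ \t]*set[ \t]+PDKNAMESPACE[ \t]+([^ \t]+)$')
devrex = re.compile(r'^[ \t]*proc[ \t]+([^ :\t]+)::([^ \t_]+)_defaults')

pdklibrary = None
devlist = []
for line in scripttext.splitlines():
    lmatch = librex.match(line)
    if lmatch:
        pdklibrary = lmatch.group(1)
    dmatch = devrex.match(line)
    # Only collect devices defined in the PDK's own namespace
    if dmatch and dmatch.group(1) == pdklibrary:
        devlist.append(dmatch.group(2))

print(pdklibrary, devlist)   # mypdk ['nmos', 'pmos']
```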
+
+            # Link to the PDK magic startup file from the target directory
+            # If there is no -F version then look for one without -F (open source PDK)
+            startup_script = targetdir + mag_current + pdkname + '-F.magicrc'
+            if not os.path.isfile(startup_script):
+                startup_script = targetdir + mag_current + pdkname + '.magicrc'
+
+            if have_mag_8_2 and os.path.isfile(startup_script):
+                # If the symbolic link exists, remove it.
+                if os.path.isfile(destlibdir + '/.magicrc'):
+                    os.remove(destlibdir + '/.magicrc')
+                os.symlink(startup_script, destlibdir + '/.magicrc')
+
+                # Find GDS file names in the source
+                print('Getting GDS file list from ' + srclibdir + '.')
+                gdsfilesraw = os.listdir(srclibdir)
+                gdsfiles = []
+                for gdsfile in gdsfilesraw:
+                    gdsext = os.path.splitext(gdsfile)[1].lower()
+                    if gdsext == '.gds' or gdsext == '.gdsii' or gdsext == '.gds2':
+                        gdsfiles.append(gdsfile)
+
+                # Create exclude list with glob-style matching using fnmatch
+                if len(gdsfiles) > 0:
+                    gdsnames = list(os.path.split(item)[1] for item in gdsfiles)
+                    notgdsnames = []
+                    for exclude in gds_exclude:
+                        notgdsnames.extend(fnmatch.filter(gdsnames, exclude))
+
+                    # Apply exclude list
+                    if len(notgdsnames) > 0:
+                        for file in gdsfiles[:]:
+                            if os.path.split(file)[1] in notgdsnames:
+                                gdsfiles.remove(file)
+
+                # Generate a script called "generate_magic.tcl" and leave it in
+                # the target directory.  Use it as input to magic to create the
+                # .mag files from the database.
+
+                print('Creating magic generation script to generate magic database files.') 
+
+                with open(destlibdir + '/generate_magic.tcl', 'w') as ofile:
+                    print('#!/usr/bin/env wish', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('# Script to generate .mag files from .gds    ', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('gds readonly true', file=ofile)
+                    print('gds flatten true', file=ofile)
+                    print('gds rescale false', file=ofile)
+                    print('tech unlock *', file=ofile)
+
+                    for gdsfile in gdsfiles:
+                        # Note:  DO NOT use a relative path here.
+                        print('gds read ' + srclibdir + '/' + gdsfile, file=ofile)
+
+                    # Make sure properties include the Tcl generated cell
+                    # information from the PDK script
+
+                    if pdklibrary:
+                        tclfixedlist = '{' + ' '.join(fixedlist) + '}'
+                        print('set devlist ' + tclfixedlist, file=ofile)
+                        print('set topcell [lindex [cellname list top] 0]', file=ofile)
+
+                        print('foreach cellname $devlist {', file=ofile)
+                        print('    load $cellname', file=ofile)
+                        print('    property gencell $cellname', file=ofile)
+                        print('    property parameter m=1', file=ofile)
+                        print('    property library ' + pdklibrary, file=ofile)
+                        print('}', file=ofile)
+                        print('load $topcell', file=ofile)
+
+                    print(r'cellname delete \(UNNAMED\)', file=ofile)
+                    print('writeall force', file=ofile)
+
+                    leffiles = []
+                    lefmacros = []
+                    if have_lefanno:
+                        # Find LEF file names in the source
+                        if ef_format:
+                            lefsrcdir = targetdir + lef_reflib + 'lefanno'
+                            lefsrclibdir = lefsrcdir + '/' + destlib
+                        else:
+                            lefsrcdir = targetdir + lef_reflib + destlib + '/lefanno'
+                            lefsrclibdir = lefsrcdir
+
+                        leffiles = os.listdir(lefsrclibdir)
+                        leffiles = list(item for item in leffiles if os.path.splitext(item)[1] == '.lef')
+                        # Get list of abstract views to make from LEF macros
+                        for leffile in leffiles:
+                            with open(lefsrclibdir + '/' + leffile, 'r') as ifile:
+                                ltext = ifile.read()
+                                llines = ltext.splitlines()
+                                for lline in llines:
+                                    ltok = re.split(r' |\t|\(', lline)
+                                    if ltok[0] == 'MACRO':
+                                        lefmacros.append(ltok[1])
+
+                        # Create exclude list with glob-style matching using fnmatch
+                        if len(lefmacros) > 0:
+                            lefnames = list(os.path.split(item)[1] for item in lefmacros)
+                            notlefnames = []
+                            for exclude in lef_exclude:
+                                notlefnames.extend(fnmatch.filter(lefnames, exclude))
+
+                            # Apply exclude list
+                            if len(notlefnames) > 0:
+                                for file in lefmacros[:]:
+                                    if os.path.split(file)[1] in notlefnames:
+                                        lefmacros.remove(file)
+
+                    elif have_verilog and os.path.isdir(vlibdir):
+                        # Get list of abstract views to make from verilog modules
+                        vfiles = os.listdir(vlibdir)
+                        vfiles = list(item for item in vfiles if os.path.splitext(item)[1] == '.v')
+                        for vfile in vfiles:
+                            with open(vlibdir + '/' + vfile, 'r') as ifile:
+                                vtext = ifile.read()
+                                vlines = vtext.splitlines()
+                                for vline in vlines:
+                                    vtok = re.split(r' |\t|\(', vline)
+                                    try:
+                                        if vtok[0] == 'module':
+                                            if vtok[1] not in lefmacros:
+                                                lefmacros.append(vtok[1])
+                                    except IndexError:
+                                        pass
+
+                        # Create exclude list with glob-style matching using fnmatch
+                        if len(lefmacros) > 0:
+                            lefnames = list(os.path.split(item)[1] for item in lefmacros)
+                            notlefnames = []
+                            for exclude in verilog_exclude:
+                                notlefnames.extend(fnmatch.filter(lefnames, exclude))
+
+                            # Apply exclude list
+                            if len(notlefnames) > 0:
+                                for file in lefmacros[:]:
+                                    if os.path.split(file)[1] in notlefnames:
+                                        lefmacros.remove(file)
+
+                    elif have_cdl and os.path.isdir(clibdir):
+                        # Get list of abstract views to make from CDL subcircuits
+                        cfiles = os.listdir(clibdir)
+                        cfiles = list(item for item in cfiles if os.path.splitext(item)[1] == '.cdl')
+                        for cfile in cfiles:
+                            with open(clibdir + '/' + cfile, 'r') as ifile:
+                                ctext = ifile.read()
+                                clines = ctext.splitlines()
+                                for cline in clines:
+                                    ctok = cline.split()
+                                    try:
+                                        if ctok[0].lower() == '.subckt':
+                                            if ctok[1] not in lefmacros:
+                                                lefmacros.append(ctok[1])
+                                    except IndexError:
+                                        pass
+
+                        # Create exclude list with glob-style matching using fnmatch
+                        if len(lefmacros) > 0:
+                            lefnames = list(os.path.split(item)[1] for item in lefmacros)
+                            notlefnames = []
+                            for exclude in cdl_exclude:
+                                notlefnames.extend(fnmatch.filter(lefnames, exclude))
+
+                            # Apply exclude list
+                            if len(notlefnames) > 0:
+                                for file in lefmacros[:]:
+                                    if os.path.split(file)[1] in notlefnames:
+                                        lefmacros.remove(file)
+
+                    elif have_spice and os.path.isdir(slibdir):
+                        # Get list of abstract views to make from SPICE subcircuits
+                        sfiles = os.listdir(slibdir)
+                        for sfile in sfiles:
+                            with open(slibdir + '/' + sfile, 'r') as ifile:
+                                stext = ifile.read()
+                                slines = stext.splitlines()
+                                for sline in slines:
+                                    stok = sline.split()
+                                    try:
+                                        if stok[0].lower() == '.subckt':
+                                            if stok[1] not in lefmacros:
+                                                lefmacros.append(stok[1])
+                                    except IndexError:
+                                        pass
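The `.subckt` name scan used for the CDL and SPICE sources can be sketched standalone (the library text here is hypothetical):

```python
# Hypothetical SPICE library text; mirrors the .subckt scan above.
stext = '''* example cells
.subckt inv_x1 a y vdd vss
.ends
.SUBCKT nand2_x1 a b y vdd vss
.ends
'''

lefmacros = []
for sline in stext.splitlines():
    stok = sline.split()
    # Length check guards blank/short lines instead of a bare except
    if len(stok) > 1 and stok[0].lower() == '.subckt':
        if stok[1] not in lefmacros:
            lefmacros.append(stok[1])
print(lefmacros)   # ['inv_x1', 'nand2_x1']
```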
+
+                        # Create exclude list with glob-style matching using fnmatch
+                        if len(lefmacros) > 0:
+                            lefnames = list(os.path.split(item)[1] for item in lefmacros)
+                            notlefnames = []
+                            for exclude in spice_exclude:
+                                notlefnames.extend(fnmatch.filter(lefnames, exclude))
+
+                            # Apply exclude list
+                            if len(notlefnames) > 0:
+                                for file in lefmacros[:]:
+                                    if os.path.split(file)[1] in notlefnames:
+                                        lefmacros.remove(file)
+
+                    if not lefmacros:
+                        print('No source for abstract views:  Abstract views not made.')
+                    elif not have_lef:
+                        # This library has a GDS database but no LEF database.  Use
+                        # magic to create abstract views of the GDS cells.  If
+                        # option "annotate" is given, then read the LEF file after
+                        # loading the database file to annotate the cell with
+                        # information from the LEF file.  This usually indicates
+                        # that the LEF file has some weird definition of obstruction
+                        # layers and we want to normalize them by using magic's LEF
+                        # write procedure, but we still need the pin use and class
+                        # information from the LEF file, and maybe the bounding box.
+
+                        if have_lefanno:
+                            for leffile in leffiles:
+                                print('lef read ' + lefsrclibdir + '/' + leffile, file=ofile)
+                        for lefmacro in lefmacros:
+                            print('if {[cellname list exists ' + lefmacro + '] != 0} {', file=ofile)
+                            print('   load ' + lefmacro, file=ofile)
+                            print('   lef write ' + lefmacro + ' -hide', file=ofile)
+                            print('}', file=ofile)
+                    print('puts stdout "Done."', file=ofile)
+                    print('quit -noprompt', file=ofile)
+
+                print('Running magic to create magic database files.')
+                sys.stdout.flush()
+
+                # Run magic to read in the GDS file and write out magic databases.
+                with open(destlibdir + '/generate_magic.tcl', 'r') as ifile:
+                    mproc = subprocess.run(['magic', '-dnull', '-noconsole'],
+                                stdin = ifile, stdout = subprocess.PIPE,
+                                stderr = subprocess.PIPE, cwd = destlibdir,
+                                universal_newlines = True)
+                    if mproc.stdout:
+                        for line in mproc.stdout.splitlines():
+                            print(line)
+                    if mproc.stderr:
+                        print('Error message output from magic:')
+                        for line in mproc.stderr.splitlines():
+                            print(line)
+                    if mproc.returncode != 0:
+                        print('ERROR:  Magic exited with status ' + str(mproc.returncode))
+
+                if not have_lef:
+                    print('No LEF file install;  need to generate LEF.')
+                    # Remove the lefanno/ target and its contents.
+                    if have_lefanno:
+                        if ef_format:
+                            lefannosrcdir = targetdir + lef_reflib + 'lefanno'
+                        else:
+                            lefannosrcdir = targetdir + lef_reflib + destlib + '/lefanno'
+                        if os.path.isdir(lefannosrcdir):
+                            shutil.rmtree(lefannosrcdir)
+
+                    if ef_format:
+                        destlefdir = targetdir + lef_reflib + 'lef'
+                        destleflibdir = destlefdir + '/' + destlib
+                    else:
+                        destlefdir = targetdir + lef_reflib + destlib + '/lef'
+                        destleflibdir = destlefdir
+
+                    os.makedirs(destleflibdir, exist_ok=True)
+                    leflist = os.listdir(destlibdir)
+                    leflist = list(item for item in leflist if os.path.splitext(item)[1] == '.lef')
+
+                    # All macros will go into one file
+                    destleflib = destleflibdir + '/' + destlib + '.lef'
+                    # Remove any existing library file from the target directory
+                    if os.path.isfile(destleflib):
+                        print('Removing existing library ' + destleflib)
+                        os.remove(destleflib)
+
+                    first = True
+                    with open(destleflib, 'w') as ofile:
+                        for leffile in leflist:
+                            # Remove any existing single file from the target directory
+                            if os.path.isfile(destleflibdir + '/' + leffile):
+                                print('Removing ' + destleflibdir + '/' + leffile)
+                                os.remove(destleflibdir + '/' + leffile)
+
+                            # Append contents
+                            sourcelef =  destlibdir + '/' + leffile
+                            with open(sourcelef, 'r') as ifile:
+                                leflines = ifile.read().splitlines()
+                                if not first:
+                                    # Remove header from all but the first file
+                                    leflines = leflines[8:]
+                                else:
+                                    first = False
+
+                            for line in leflines:
+                                print(line, file=ofile)
+
+                            # Remove file from the source directory
+                            print('Removing source file ' + sourcelef)
+                            os.remove(sourcelef)
+
+                    # Set have_lef now that LEF files were made, so they
+                    # can be used to generate the maglef/ databases.
+                    have_lef = True
+
+            elif not have_mag_8_2:
+                print('The installer is not able to run magic.')
+            else:
+                print("Master PDK magic startup file not found.  Did you install")
+                print("PDK tech files before PDK vendor files?")
+
+    if have_lef and not no_lef_convert:
+        print("Migrating LEF files to layout.")
+        if ef_format:
+            destdir = targetdir + '/libs.ref/' + 'maglef'
+            srcdir = targetdir + lef_reflib + 'lef'
+            magdir = targetdir + gds_reflib + 'mag'
+            cdldir = targetdir + cdl_reflib + 'cdl'
+            os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+
+            if ef_format:
+                destlibdir = destdir + '/' + destlib
+                srclibdir = srcdir + '/' + destlib
+                maglibdir = magdir + '/' + destlib
+                cdllibdir = cdldir + '/' + destlib
+            else:
+                destdir = targetdir + '/libs.ref/' + destlib + '/maglef'
+                srcdir = targetdir + lef_reflib + destlib + '/lef'
+                magdir = targetdir + gds_reflib + destlib + '/mag'
+                cdldir = targetdir + cdl_reflib + destlib + '/cdl'
+
+                destlibdir = destdir
+                srclibdir = srcdir
+                maglibdir = magdir
+                cdllibdir = cdldir
+
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Link to the PDK magic startup file from the target directory
+            startup_script = targetdir + mag_current + pdkname + '-F.magicrc'
+            if not os.path.isfile(startup_script):
+                startup_script = targetdir + mag_current + pdkname + '.magicrc'
+
+            if have_mag_8_2 and os.path.isfile(startup_script):
+                # If the symbolic link exists, remove it.
+                if os.path.isfile(destlibdir + '/.magicrc'):
+                    os.remove(destlibdir + '/.magicrc')
+                os.symlink(startup_script, destlibdir + '/.magicrc')
+ 
+                # Find LEF file names in the source
+                leffiles = os.listdir(srclibdir)
+                leffiles = list(item for item in leffiles if os.path.splitext(item)[1].lower() == '.lef')
+
+                # Get list of abstract views to make from LEF macros
+                lefmacros = []
+                err_no_macros = False
+                for leffile in leffiles:
+                    with open(srclibdir + '/' + leffile, 'r') as ifile:
+                        ltext = ifile.read()
+                        llines = ltext.splitlines()
+                        for lline in llines:
+                            ltok = re.split(r' |\t|\(', lline)
+                            if ltok[0] == 'MACRO':
+                                lefmacros.append(ltok[1])
+
+                # Create exclude list with glob-style matching using fnmatch
+                if len(lefmacros) > 0:
+                    lefnames = list(os.path.split(item)[1] for item in lefmacros)
+                    notlefnames = []
+                    for exclude in lef_exclude:
+                        notlefnames.extend(fnmatch.filter(lefnames, exclude))
+
+                    # Apply exclude list
+                    if len(notlefnames) > 0:
+                        for file in lefmacros[:]:
+                            if os.path.split(file)[1] in notlefnames:
+                                lefmacros.remove(file)
+
+                if len(leffiles) == 0:
+                    print('Warning:  No LEF files found in ' + srclibdir)
+                    continue
+
+                print('Generating conversion script to create magic databases from LEF')
+
+                # Generate a script called "generate_magic.tcl" and leave it in
+                # the target directory.  Use it as input to magic to create the
+                # .mag files from the database.
+
+                with open(destlibdir + '/generate_magic.tcl', 'w') as ofile:
+                    print('#!/usr/bin/env wish', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('# Script to generate .mag files from .lef    ', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('tech unlock *', file=ofile)
+
+                    # If there are devices in the LEF file that come from the
+                    # PDK library, then copy this list into the script.
+
+                    if pdklibrary:
+                        shortdevlist = []
+                        for macro in lefmacros:
+                            if macro in devlist:
+                                shortdevlist.append(macro)
+
+                        tcldevlist = '{' + ' '.join(shortdevlist) + '}'
+                        print('set devlist ' + tcldevlist, file=ofile)
+
+                    for leffile in leffiles:
+                        print('lef read ' + srclibdir + '/' + leffile, file=ofile)
+
+                    for lefmacro in lefmacros:
+
+                        # To be completed:  Parse SPICE file for port order, make
+                        # sure ports are present and ordered.
+
+                        if pdklibrary and lefmacro in shortdevlist:
+                            print('set cellname ' + lefmacro, file=ofile)
+                            print('if {[lsearch $devlist $cellname] >= 0} {',
+					file=ofile)
+                            print('    load $cellname', file=ofile)
+                            print('    property gencell $cellname', file=ofile)
+                            print('    property parameter m=1', file=ofile)
+                            print('    property library ' + pdklibrary, file=ofile)
+                            print('}', file=ofile)
+
+                    # Load one of the LEF files so that the default (UNNAMED) cell
+                    # is not loaded, then delete (UNNAMED) so it doesn't generate
+                    # an error message.
+                    if len(lefmacros) > 0:
+                        print('load ' + lefmacros[0], file=ofile)
+                        print('cellname delete \\(UNNAMED\\)', file=ofile)
+                    else:
+                        err_no_macros = True
+                    print('writeall force', file=ofile)
+                    print('puts stdout "Done."', file=ofile)
+                    print('quit -noprompt', file=ofile)
+
+                if err_no_macros:
+                    print('Warning:  No LEF macros were defined.')
+
+                print('Running magic to create magic databases from LEF')
+                sys.stdout.flush()
+
+                # Run magic to read in the LEF file and write out magic databases.
+                with open(destlibdir + '/generate_magic.tcl', 'r') as ifile:
+                    mproc = subprocess.run(['magic', '-dnull', '-noconsole'],
+				stdin = ifile, stdout = subprocess.PIPE,
+				stderr = subprocess.PIPE, cwd = destlibdir,
+				universal_newlines = True)
+                    if mproc.stdout:
+                        for line in mproc.stdout.splitlines():
+                            print(line)
+                    if mproc.stderr:
+                        print('Error message output from magic:')
+                        for line in mproc.stderr.splitlines():
+                            print(line)
+                    if mproc.returncode != 0:
+                        print('ERROR:  Magic exited with status ' + str(mproc.returncode))
+
+
+                # Now list all the .mag files generated, and for each, read the
+                # corresponding file from the mag/ directory, pull the GDS file
+                # properties, and add those properties to the maglef view.  Also
+                # read the CDL (or SPICE) netlist, read the ports, and rewrite
+                # the port order in the mag and maglef file accordingly.
+
+                # Diagnostic
+                print('Annotating files in ' + destlibdir)
+                sys.stdout.flush()
+                magfiles = os.listdir(destlibdir)
+                magfiles = list(item for item in magfiles if os.path.splitext(item)[1] == '.mag')
+                for magroot in magfiles:
+                    magname = os.path.splitext(magroot)[0]
+                    magfile = maglibdir + '/' + magroot
+                    magleffile = destlibdir + '/' + magroot
+                    prop_lines = get_gds_properties(magfile)
+
+                    # Make sure properties include the Tcl generated cell
+                    # information from the PDK script
+
+                    prop_gencell = []
+                    if pdklibrary:
+                        if magname in fixedlist:
+                            prop_gencell.append('gencell ' + magname)
+                            prop_gencell.append('library ' + pdklibrary)
+                            prop_gencell.append('parameter m=1')
+
+                    nprops = len(prop_lines) + len(prop_gencell)
+
+                    cdlfile = cdllibdir + '/' + magname + '.cdl'
+                    if os.path.exists(cdlfile):
+                        cdlfiles = [cdlfile]
+                    else:
+                        # Assume there is at least one file containing all of
+                        # the cell subcircuits.  glob returns an empty list
+                        # if there are no matching files.
+                        cdlfiles = glob.glob(cdllibdir + '/*.cdl')
+                    if len(cdlfiles) > 0:
+                        for cdlfile in cdlfiles:
+                            port_dict = get_subckt_ports(cdlfile, magname)
+                            if port_dict != {}:
+                                break
+                    else:
+                        port_dict = {}
+
+                    if port_dict == {}:
+                        print('No CDL file contains ' + destlib + ' device ' + magname)
+                        cdlfile = None
+                        # To be done:  If destlib is 'primitive', then look in
+                        # SPICE models for port order.
+                        if destlib == 'primitive':
+                            print('Fix me:  Need to look in SPICE models!')
+
+                    proprex = re.compile('<< properties >>')
+                    endrex = re.compile('<< end >>')
+                    rlabrex = re.compile('rlabel[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+([^ \t]+)')
+                    flabrex = re.compile('flabel[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+([^ \t]+)')
+                    portrex = re.compile('port[ \t]+([^ \t]+)[ \t]+(.*)')
+                    gcellrex = re.compile('string gencell')
+                    portnum = -1
+
+                    with open(magleffile, 'r') as ifile:
+                        magtext = ifile.read().splitlines()
+
+                    with open(magleffile, 'w') as ofile:
+                        has_props = False
+                        is_gencell = False
+                        for line in magtext:
+                            tmatch = portrex.match(line)
+                            if tmatch:
+                                if portnum >= 0:
+                                    line = 'port ' + str(portnum) + ' ' + tmatch.group(2)
+                                else:
+                                    line = 'port ' + tmatch.group(1) + ' ' + tmatch.group(2)
+                            ematch = endrex.match(line)
+                            if ematch and nprops > 0:
+                                if not has_props:
+                                    print('<< properties >>', file=ofile)
+                                if not is_gencell:
+                                    for prop in prop_gencell:
+                                        print('string ' + prop, file=ofile)
+                                for prop in prop_lines:
+                                    print('string ' + prop, file=ofile)
+
+                            print(line, file=ofile)
+                            pmatch = proprex.match(line)
+                            if pmatch:
+                                has_props = True
+
+                            gmatch = gcellrex.match(line)
+                            if gmatch:
+                                is_gencell = True
+
+                            lmatch = flabrex.match(line)
+                            if not lmatch:
+                                lmatch = rlabrex.match(line)
+                            if lmatch:
+                                labname = lmatch.group(1).lower()
+                                portnum = port_dict.get(labname, -1)
+
+                    if os.path.exists(magfile):
+                        with open(magfile, 'r') as ifile:
+                            magtext = ifile.read().splitlines()
+
+                        with open(magfile, 'w') as ofile:
+                            for line in magtext:
+                                tmatch = portrex.match(line)
+                                if tmatch:
+                                    if portnum >= 0:
+                                        line = 'port ' + str(portnum) + ' ' + tmatch.group(2)
+                                    else:
+                                        line = 'port ' + tmatch.group(1) + ' ' + tmatch.group(2)
+                                ematch = endrex.match(line)
+                                print(line, file=ofile)
+                                lmatch = flabrex.match(line)
+                                if not lmatch:
+                                    lmatch = rlabrex.match(line)
+                                if lmatch:
+                                    labname = lmatch.group(1).lower()
+                                    portnum = port_dict.get(labname, -1)
+                    elif os.path.splitext(magfile)[1] == '.mag':
+                        # NOTE:  Possibly this means the GDS cell has a different name.
+                        print('Error:  No file ' + magfile + ' corresponding to this maglef view.')
+
+            elif not have_mag_8_2:
+                print('The installer is not able to run magic.')
+            else:
+                print("Master PDK magic startup file not found.  Did you install")
+                print("PDK tech files before PDK vendor files?")
+
+    # If SPICE or CDL databases were specified, then convert them to
+    # a form that can be used by ngspice, using the cdl2spi.py script.
+
+    if have_spice:
+        if ef_format:
+            if not os.path.isdir(targetdir + cdl_reflib + 'spi'):
+                os.makedirs(targetdir + cdl_reflib + 'spi', exist_ok=True)
+
+    elif have_cdl and not no_cdl_convert:
+        if ef_format:
+            if not os.path.isdir(targetdir + cdl_reflib + 'spi'):
+                os.makedirs(targetdir + cdl_reflib + 'spi', exist_ok=True)
+
+        print("Migrating CDL netlists to SPICE.")
+        sys.stdout.flush()
+
+        if ef_format:
+            destdir = targetdir + cdl_reflib + 'spi'
+            srcdir = targetdir + cdl_reflib + 'cdl'
+            os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+
+            if ef_format:
+                destlibdir = destdir + '/' + destlib
+                srclibdir = srcdir + '/' + destlib
+            else:
+                destdir = targetdir + cdl_reflib + destlib + '/spice'
+                srcdir = targetdir + cdl_reflib + destlib + '/cdl'
+
+                destlibdir = destdir
+                srclibdir = srcdir
+
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Find CDL file names in the source
+            # If CDL is marked compile-only then ONLY convert <destlib>.cdl
+            if cdl_compile_only:
+                alllibname = destlibdir + '/' + destlib + '.cdl'
+                if not os.path.exists(alllibname):
+                    cdl_compile_only = False
+                else:
+                    cdlfiles = [alllibname]
+
+            if not cdl_compile_only:
+                cdlfiles = os.listdir(srclibdir)
+                cdlfiles = list(item for item in cdlfiles if os.path.splitext(item)[1].lower() == '.cdl')
+
+            # The directory with scripts should be in ../common with respect
+            # to the Makefile that determines the cwd.
+            scriptdir = os.path.split(os.getcwd())[0] + '/common'
+
+            # Run cdl2spi.py script to read in the CDL file and write out SPICE
+            for cdlfile in cdlfiles:
+                if ef_format:
+                    spiname = os.path.splitext(cdlfile)[0] + '.spi'
+                else:
+                    spiname = os.path.splitext(cdlfile)[0] + '.spice'
+                procopts = [scriptdir + '/cdl2spi.py', srclibdir + '/' + cdlfile, destlibdir + '/' + spiname]
+                if do_cdl_scaleu:
+                    procopts.append('-dscale=u')
+                for item in ignorelist:
+                    procopts.append('-ignore=' + item)
+
+                print('Running (in ' + destlibdir + '): ' + ' '.join(procopts))
+                pproc = subprocess.run(procopts,
+			stdin = subprocess.DEVNULL, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, cwd = destlibdir,
+			universal_newlines = True)
+                if pproc.stdout:
+                    for line in pproc.stdout.splitlines():
+                        print(line)
+                if pproc.stderr:
+                    print('Error message output from cdl2spi.py:')
+                    for line in pproc.stderr.splitlines():
+                        print(line)
+
+    elif have_gds and not no_gds_convert:
+        # If neither SPICE nor CDL formats is available in the source, then
+        # read GDS;  if the result has no ports, then read the corresponding
+        # LEF library to get port information.  Then write out a SPICE netlist
+        # for the whole library.  NOTE:  If there is no CDL or SPICE source,
+        # then the port numbering is arbitrary, and becomes whatever the
+        # output of this script makes it.
+
+        if ef_format:
+            destdir = targetdir + cdl_reflib + 'spi'
+            srcdir = targetdir + gds_reflib + 'gds'
+            lefdir = targetdir + lef_reflib + 'lef'
+            os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+
+            if ef_format:
+                destlibdir = destdir + '/' + destlib
+                srclibdir = srcdir + '/' + destlib
+                leflibdir = lefdir + '/' + destlib
+            else:
+                destdir = targetdir + cdl_reflib + destlib + '/spice'
+                srcdir = targetdir + gds_reflib + destlib + '/gds'
+                lefdir = targetdir + lef_reflib + destlib + '/lef'
+
+                destlibdir = destdir
+                srclibdir = srcdir
+                leflibdir = lefdir
+
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Link to the PDK magic startup file from the target directory
+            startup_script = targetdir + mag_current + pdkname + '-F.magicrc'
+            if not os.path.isfile(startup_script):
+                startup_script = targetdir + mag_current + pdkname + '.magicrc'
+            if os.path.isfile(startup_script):
+                # If the symbolic link exists, remove it.
+                if os.path.isfile(destlibdir + '/.magicrc'):
+                    os.remove(destlibdir + '/.magicrc')
+                os.symlink(startup_script, destlibdir + '/.magicrc')
+
+            # Get the consolidated GDS library file, or a list of all GDS files
+            # if there is no single consolidated library
+
+            allgdslibname = srclibdir + '/' + destlib + '.gds'
+            if not os.path.isfile(allgdslibname):
+                glist = glob.glob(srclibdir + '/*.gds')
+                glist.extend(glob.glob(srclibdir + '/*.gdsii'))
+                glist.extend(glob.glob(srclibdir + '/*.gds2'))
+
+            allleflibname = leflibdir + '/' + destlib + '.lef'
+            if not os.path.isfile(allleflibname):
+                llist = glob.glob(leflibdir + '/*.lef')
+
+            print('Creating magic generation script to generate SPICE library.') 
+            with open(destlibdir + '/generate_magic.tcl', 'w') as ofile:
+                print('#!/usr/bin/env wish', file=ofile)
+                print('#---------------------------------------------', file=ofile)
+                print('# Script to generate SPICE library from GDS   ', file=ofile)
+                print('#---------------------------------------------', file=ofile)
+                print('drc off', file=ofile)
+                print('gds readonly true', file=ofile)
+                print('gds flatten true', file=ofile)
+                print('gds rescale false', file=ofile)
+                print('tech unlock *', file=ofile)
+
+                if not os.path.isfile(allgdslibname):
+                    for gdsfile in glist:
+                        print('gds read ' + gdsfile, file=ofile)
+                else:
+                    print('gds read ' + allgdslibname, file=ofile)
+
+                if not os.path.isfile(allleflibname):
+                    # Annotate the cells with information from the LEF files
+                    for leffile in llist:
+                        print('lef read ' + leffile, file=ofile)
+                else:
+                    print('lef read ' + allleflibname, file=ofile)
+
+                # Load first file and remove the (UNNAMED) cell
+                if not os.path.isfile(allgdslibname):
+                    print('load ' + os.path.splitext(glist[0])[0], file=ofile)
+                else:
+                    gdslibroot = os.path.split(allgdslibname)[1]
+                    print('load ' + os.path.splitext(gdslibroot)[0], file=ofile)
+                print('cellname delete \\(UNNAMED\\)', file=ofile)
+
+                print('ext2spice lvs', file=ofile)
+
+                # NOTE:  Leaving "subcircuit top" as "auto" (default) can cause
+                # cells like decap that have no I/O to be output without a subcircuit
+                # wrapper.  Also note that if this happens, it is an indication that
+                # power supplies have not been labeled as ports, which is harder to
+                # handle and should be fixed in the source.
+                print('ext2spice subcircuit top on', file=ofile)
+
+                print('ext2spice cthresh 0.1', file=ofile)
+
+                if os.path.isfile(allgdslibname):
+                    print('select top cell', file=ofile)
+                    print('set glist [cellname list children]', file=ofile)
+                    print('foreach cell $glist {', file=ofile)
+                else:
+                    print('foreach cell [cellname list top] {', file=ofile)
+
+                print('    load $cell', file=ofile)
+                print('    puts stdout "Extracting cell $cell"', file=ofile)
+                print('    extract all', file=ofile)
+                print('    ext2spice', file=ofile)
+                print('}', file=ofile)
+                print('puts stdout "Done."', file=ofile)
+                print('quit -noprompt', file=ofile)
+
+            # Run magic to read in the individual GDS files and
+            # write out the consolidated GDS library
+
+            print('Running magic to create GDS library.')
+            sys.stdout.flush()
+
+            mproc = subprocess.run(['magic', '-dnull', '-noconsole',
+				destlibdir + '/generate_magic.tcl'],
+				stdin = subprocess.DEVNULL,
+				stdout = subprocess.PIPE,
+				stderr = subprocess.PIPE, cwd = destlibdir,
+				universal_newlines = True)
+            if mproc.stdout:
+                for line in mproc.stdout.splitlines():
+                    print(line)
+            if mproc.stderr:
+                print('Error message output from magic:')
+                for line in mproc.stderr.splitlines():
+                    print(line)
+            if mproc.returncode != 0:
+                print('ERROR:  Magic exited with status ' + str(mproc.returncode))
+
+            # Remove intermediate extraction files
+            extfiles = glob.glob(destlibdir + '/*.ext')
+            for extfile in extfiles:
+                os.remove(extfile)
+
+            # If the GDS file was a consolidated file of all cells, then
+            # create a similar SPICE library of all cells.
+
+            if os.path.isfile(allgdslibname):
+                spiext = '.spice' if not ef_format else '.spi'
+                create_spice_library(destlibdir, destlib, spiext, do_compile_only, do_stub, excludelist)
+
+    sys.exit(0)
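The maglef annotation pass above reads `rlabel`/`flabel` lines and rewrites the `port` statement that follows each label, using a name-to-number map extracted from the CDL subcircuit definition. That logic can be sketched in isolation as below; this is a simplified illustration, not the installer's exact code, and the label layer, coordinates, and port numbers used are hypothetical.

```python
import re

# Standalone sketch of the port-renumbering pass:  an rlabel line names
# a port, and the number looked up in a {label: number} dict (derived
# from the CDL netlist) is applied to the next "port" statement.
rlabrex = re.compile(r'rlabel\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+(\S+)')
portrex = re.compile(r'port\s+(\S+)\s+(.*)')

def renumber_ports(maglines, port_dict):
    portnum = -1
    output = []
    for line in maglines:
        tmatch = portrex.match(line)
        if tmatch and portnum >= 0:
            # Apply the port number recorded for the preceding label
            line = 'port ' + str(portnum) + ' ' + tmatch.group(2)
        output.append(line)
        lmatch = rlabrex.match(line)
        if lmatch:
            # Remember the number for the label just seen, or -1 if the
            # label is not a known subcircuit port
            portnum = port_dict.get(lmatch.group(1).lower(), -1)
    return output
```

When a label is absent from the dictionary, the original port number is left untouched, matching the `-1` sentinel convention in the script above.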
diff --git a/common/insert_property.py b/common/insert_property.py
new file mode 100755
index 0000000..1ee5687
--- /dev/null
+++ b/common/insert_property.py
@@ -0,0 +1,133 @@
+#!/usr/bin/env python3
+#
+# insert_property.py:  For the given install path, library name, and cellname,
+# find the Magic layout of the cell, and add the specified property string.
+# If the property already exists with the same value, it is left unchanged.
+# If the property exists but has a different value, it is replaced.
+# The property is added to the layout in both the mag/ (full) and maglef/
+# (abstract) directories.  Options "-mag" and "-maglef" restrict the
+# change to the indicated view only.
+# 
+# e.g.:
+#
+# insert_property.py /home/tim/projects/efabless/tech/SkyWater/EFS8A \
+#	s8iom0 s8iom0s8_top_gpio "FIXED_BBOX 0 607 15000 40200"
+
+import os
+import re
+import sys
+
+def addprop(filename, propstring, noupdate):
+    with open(filename, 'r') as ifile:
+        magtext = ifile.read().splitlines() 
+
+    propname = propstring.split()[0]
+    proprex = re.compile('<< properties >>')
+    endrex = re.compile('<< end >>')
+
+    in_props = False
+    printed = False
+    done = False
+
+    with open(filename, 'w') as ofile:
+        for line in magtext:
+            pmatch = proprex.match(line)
+            if pmatch:
+                in_props = True
+            elif in_props:
+                linetok = line.split()
+                if len(linetok) >= 2 and linetok[0] == 'string':
+                    testname = linetok[1]
+                    if testname == propname:
+                        if not noupdate:
+                            print('string ' + propstring, file=ofile)
+                            printed = True
+                        done = True
+
+            ematch = endrex.match(line)
+            if ematch:
+                if not in_props:
+                    print('<< properties >>', file=ofile)
+                if not done:
+                    print('string ' + propstring, file=ofile)
+
+            if not printed:
+                print(line, file=ofile)
+            printed = False
+
+def usage():
+    print("insert_property.py <path_to_pdk> <libname> <cellname> <prop_string> [option]")
+    print("  options:")
+    print("   -mag        do only for the view in the mag/ directory")
+    print("   -maglef     do only for the view in the maglef/ directory")
+    print("   -noupdate   do not replace the property if it already exists in the file")
+    print("   -ef_format  use the efabless directory layout (libs.ref/mag/<libname>)")
+    return 0
+
+if __name__ == '__main__':
+
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.find('-', 0) == 0:
+            options.append(item)
+        else:
+            arguments.append(item)
+
+    if len(arguments) < 4:
+        print("Not enough arguments given to insert_property.py.")
+        usage()
+        sys.exit(1)
+
+    source = arguments[0]
+    libname = arguments[1]
+    cellname = arguments[2]
+    propstring = arguments[3]
+
+    noupdate = '-noupdate' in options
+    fail = 0
+
+    efformat = '-ef_format' in options
+
+    domag = True
+    domaglef = True
+    if '-mag' in options and '-maglef' not in options:
+        domaglef = False
+    if '-maglef' in options and '-mag' not in options:
+        domag = False
+
+    if domag:
+        if efformat:
+            filename = source + '/libs.ref/mag/' + libname + '/' + cellname + '.mag'
+        else:
+            filename = source + '/libs.ref/' + libname + '/mag/' + cellname + '.mag'
+
+        if os.path.isfile(filename):
+            addprop(filename, propstring, noupdate)
+        else:
+            fail += 1
+    else:
+        fail += 1
+
+    if domaglef:
+        if efformat:
+            filename = source + '/libs.ref/maglef/' + libname + '/' + cellname + '.mag'
+        else:
+            filename = source + '/libs.ref/' + libname + '/maglef/' + cellname + '.mag'
+
+        if os.path.isfile(filename):
+            addprop(filename, propstring, noupdate)
+        else:
+            fail += 1
+    else:
+        fail += 1
+
+    if fail == 2:
+        print('Error:  No layout file in either mag/ or maglef/', file=sys.stderr)
+        if efformat:
+            print('(' + source + '/libs.ref/mag[lef]/' + libname +
+		    '/' + cellname + '.mag)', file=sys.stderr)
+        else:
+            print('(' + source + '/libs.ref/' + libname + '/mag[lef]/'
+		    + cellname + '.mag)', file=sys.stderr)
+
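The insertion logic used by insert_property.py can be exercised in isolation. The sketch below (the function name and sample lines are illustrative, not part of the script) applies the same rules to a list of .mag file lines: replace an existing property of the same name unless noupdate is set, and create the `<< properties >>` section just before the closing `<< end >>` marker if it does not exist.

```python
def insert_mag_property(magtext, propstring, noupdate=False):
    """Insert or update a 'string' property in magic database file lines.

    magtext:    list of lines from a .mag file.
    propstring: property key and value, e.g. 'GDS_FILE example.gds'.
    """
    key = propstring.split()[0]
    out = []
    in_props = False
    done = False
    for line in magtext:
        if line.startswith('<< properties >>'):
            in_props = True
        elif line.startswith('<< end >>'):
            # Reached the end marker without having seen the property:
            # insert it here, creating the properties section if needed.
            if not done:
                if not in_props:
                    out.append('<< properties >>')
                out.append('string ' + propstring)
                done = True
            in_props = False
        elif line.startswith('<<'):
            # Some other section (e.g. a layer section) begins.
            in_props = False
        elif in_props and line.split()[1:2] == [key]:
            # Property already present:  replace it unless noupdate is set.
            if not noupdate:
                line = 'string ' + propstring
            done = True
        out.append(line)
    return out
```

The real addprop() additionally rewrites the file in place and preserves the original file's other contents verbatim; this sketch only shows the section-handling rules.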
diff --git a/common/makestub.py b/common/makestub.py
new file mode 100755
index 0000000..a5334d7
--- /dev/null
+++ b/common/makestub.py
@@ -0,0 +1,122 @@
+#!/usr/bin/env python3
+#
+#-------------------------------------------------------------------
+#  makestub.py
+#
+# Read a CDL or SPICE netlist and remove all contents from subcircuits,
+# leaving only the .SUBCKT ... .ENDS wrapper.  Used as a filter, so it
+# replaces the original file with the modified one.  If the original
+# file is a symbolic link, then it is first unlinked and replaced with
+# the new contents.
+#
+# Use:
+#
+# 	makestub.py <path_to_netlist_file>
+#
+#-------------------------------------------------------------------
+
+import os
+import re
+import sys
+import stat
+import textwrap
+
+def makeuserwritable(filepath):
+    if os.path.exists(filepath):
+        st = os.stat(filepath)
+        os.chmod(filepath, st.st_mode | stat.S_IWUSR)
+
+def generate_stubs(netlist_path, output_path):
+    netlist_dir = os.path.split(netlist_path)[0]
+    netlist_filename = os.path.split(netlist_path)[1]
+    netlist_root = os.path.splitext(netlist_filename)[0]
+    netlist_ext = os.path.splitext(netlist_filename)[1]
+
+    if not os.path.exists(netlist_path):
+        print('Error:  Specified file "' + netlist_path + '" does not exist!', file=sys.stderr)
+        return
+
+    if output_path is None:
+        output_path = netlist_path
+
+    with open(netlist_path, 'r') as ifile:
+        spicetext = ifile.read().splitlines()
+
+    # Remove blank lines and comment lines
+    spicelines = []
+    for line in spicetext:
+        if len(line) > 0:
+            if line[0] != '*':
+                spicelines.append(line)
+
+    # Remove line extensions
+    spicetext = '\n'.join(spicelines)
+    spicelines = spicetext.replace('\n+', ' ').splitlines()
+
+    # SPICE subcircuit definition:
+    subcrex = re.compile(r'[ \t]*\.subckt[ \t]+([^ \t]+)[ \t]+(.*)$', re.IGNORECASE)
+    endsrex = re.compile(r'[ \t]*\.ends[ \t]*', re.IGNORECASE)
+
+    spiceoutlines = []
+
+    insub = False
+    for line in spicelines:
+        if insub:
+            ematch = endsrex.match(line)
+            if ematch:
+                insub = False
+                spiceoutlines.append(line)
+        else:
+            smatch = subcrex.match(line)
+            if smatch:
+                insub = True
+                spiceoutlines.append('')
+                spiceoutlines.append('*----------------------------------------------')
+                spiceoutlines.append('* SPICE stub entry for ' + smatch.group(1) + '.')
+                spiceoutlines.append('*----------------------------------------------')
+                spiceoutlines.append('')
+            spiceoutlines.append(line)
+
+    if output_path == netlist_path:
+        if os.path.islink(netlist_path):
+            os.unlink(netlist_path)
+
+    # Re-wrap continuation lines at 100 characters
+    wrappedlines = []
+    for line in spiceoutlines:
+        wrappedlines.append('\n+ '.join(textwrap.wrap(line, 100)))
+
+    # Just in case the file in the source repo is not user-writable
+    if os.path.exists(output_path):
+        makeuserwritable(output_path)
+
+    with open(output_path, 'w') as ofile:
+        for line in wrappedlines:
+            print(line, file=ofile)
+
+# If called as main, run generate_stubs
+
+if __name__ == '__main__':
+
+    # Divide up command line into options and arguments
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.startswith('-'):
+            options.append(item)
+        else:
+            arguments.append(item)
+
+    # Need one argument:  path to CDL or SPICE netlist
+    # If two arguments, then 2nd argument is the output file.
+
+    if len(arguments) == 2:
+        netlist_path = arguments[0]
+        output_path = arguments[1]
+        generate_stubs(netlist_path, output_path)
+    elif len(arguments) == 1:
+        netlist_path = arguments[0]
+        generate_stubs(netlist_path, None)
+    else:
+        print("Usage:  makestub.py <file_path> [<output_path>]")
+
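The core transformation in makestub.py can be demonstrated on an in-memory netlist. This standalone sketch (the helper name is an assumption, not part of the script) joins '+' continuation lines the same way the script does, then keeps only each `.subckt` header and its matching `.ends`; unlike the script, it does not strip comments or re-wrap long lines.

```python
import re

def stub_netlist(spicetext):
    """Strip subcircuit bodies from a SPICE netlist string, keeping
    only the .subckt ... .ends wrappers."""
    # Join continuation lines ('+' at the start of a line), as makestub.py does.
    lines = spicetext.replace('\n+', ' ').splitlines()
    subcrex = re.compile(r'[ \t]*\.subckt[ \t]+([^ \t]+)', re.IGNORECASE)
    endsrex = re.compile(r'[ \t]*\.ends', re.IGNORECASE)
    out = []
    insub = False
    for line in lines:
        if insub:
            # Inside a subcircuit:  drop everything until .ends.
            if endsrex.match(line):
                insub = False
                out.append(line)
        else:
            if subcrex.match(line):
                insub = True
            out.append(line)
    return '\n'.join(out)
```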
diff --git a/common/orig/foundry_install.py b/common/orig/foundry_install.py
new file mode 100755
index 0000000..96caf14
--- /dev/null
+++ b/common/orig/foundry_install.py
@@ -0,0 +1,1175 @@
+#!/usr/bin/env python3
+#
+# foundry_install.py
+#
+# This file generates the local directory structure and populates the
+# directories with foundry vendor data.
+#
+# Options:
+#    -link_from <type>	Make symbolic links to vendor files from target
+#			Types are: "none", "source", or a PDK name.
+#			Default "none" (copy all files from source)
+#    -ef_names		Use efabless naming (libs.ref/techLEF),
+#			otherwise use generic naming (libs.tech/lef)
+#
+#    -source <path>	Path to source data top level directory
+#    -target <path>	Path to target top level directory
+#
+#
+# All other options represent paths to vendor files.  They may all be
+# wildcarded with "*" to represent, e.g., version number directories,
+# or names of supported libraries.  Where wildcards exist, if there is
+# more than one directory in the path, the value represented by "*"
+# will first be checked against library names.  If no library name is
+# found, then the wildcard value will be assumed to be numeric and
+# separated by either "." or "_" to represent major/minor/sub/...
+# revision numbers (alphanumeric).
+#
+# Note only one of "-spice" or "-cdl" need be specified.  Since the
+# open source tools use ngspice, CDL files are converted to ngspice
+# syntax when needed.
+#
+#	-techlef <path>	Path to technology LEF file
+#	-doc <path>	Path to technology documentation
+#	-lef <path>	Path to LEF file
+#	-lefanno <path>	Path to LEF file (for annotation only)
+#	-spice <path>	Path to SPICE netlists
+#	-cdl <path>	Path to CDL netlists
+#	-models <path>	Path to SPICE (primitive device) models
+#	-liberty <path>	Path to Liberty timing files
+#	-gds <path>	Path to GDS data
+#	-verilog <path>	Path to verilog models
+#
+#	-library <type> <name> [<target>]	See below
+#
+# For the "-library" option, any number of libraries may be supported, and
+# one "-library" option should be provided for each supported library.
+# <type> is one of:  "digital", "primitive", or "general".  Analog and I/O
+# libraries fall under the category "general", as they are all treated the
+# same way.  <name> is the vendor name of the library.  [<target>] is the
+# (optional) local name of the library.  If omitted, then the vendor name
+# is used for the target (there is no particular reason to specify a
+# different local name for a library).
+#
+# All options "-lef", "-spice", etc., can take the additional arguments
+# 	up  <number>
+#
+# to indicate that the source hierarchy should be copied from <number>
+# levels above the files.  For example, if liberty files are kept in
+# multiple directories according to voltage level, then
+#
+# 	-liberty x/y/z/PVT_*/*.lib
+#
+# would install all .lib files directly into libs.ref/liberty/<libname>/*.lib
+# while
+#
+# 	-liberty x/y/z/PVT_*/*.lib up 1
+#
+# would install all .lib files into libs.ref/liberty/<libname>/PVT_*/*.lib
+#
+# Other library-specific arguments are:
+#
+#	nospec	:  Remove timing specification before installing
+#		    (used with verilog files;  needs to be extended to
+#		    liberty files)
+#	compile :  Create a single library from all components.  Used
+#		    when a foundry library has inconveniently split
+#		    an IP library (LEF, CDL, verilog, etc.) into
+#		    individual files.
+#
+# NOTE:  This script can be called once for all libraries if all file
+# types (gds, cdl, lef, etc.) happen to all work with the same wildcards.
+# However, it is more likely that it will be called several times for the
+# same PDK, once to install I/O cells, once to install digital, and so
+# forth, as made possible by the wild-carding.
+
+import re
+import os
+import sys
+import glob
+import shutil
+import subprocess
+
+def usage():
+    print("foundry_install.py [options...]")
+    print("   -link_from <name> Make symbolic links from target to <name>")
+    print("                     where <name> can be 'source' or a PDK name.")
+    print("                     Default behavior is to copy all files.")
+    print("   -copy             Copy files from source to target (default)")
+    print("   -ef_names         Use efabless naming conventions for local directories")
+    print("")
+    print("   -source <path>    Path to top of source directory tree")
+    print("   -target <path>    Path to top of target directory tree")
+    print("")
+    print("   -techlef <path>   Path to technology LEF file")
+    print("   -doc <path>       Path to technology documentation")
+    print("   -lef <path>       Path to LEF file")
+    print("   -lefanno <path>   Path to LEF file (for annotation only)")
+    print("   -spice <path>     Path to SPICE netlists")
+    print("   -cdl <path>       Path to CDL netlists")
+    print("   -models <path>    Path to SPICE (primitive device) models")
+    print("   -lib <path>       Path to Liberty timing files")
+    print("   -liberty <path>   Path to Liberty timing files")
+    print("   -gds <path>       Path to GDS data")
+    print("   -verilog <path>   Path to verilog models")
+    print("   -library <type> <name> [<target>]	 See below")
+    print("")
+    print(" All <path> names may be wild-carded with '*' ('glob'-style wild-cards)")
+    print("")
+    print(" All options with <path> other than source and target may take the additional")
+    print(" arguments 'up <number>', where <number> indicates the number of levels of")
+    print(" hierarchy of the source path to include when copying to the target.")
+    print("")
+    print(" Library <type> may be one of:")
+    print("    digital		Digital standard cell library")
+    print("    primitive	Primitive device library")
+    print("    general		All other library types (I/O, analog, etc.)")
+    print("")
+    print(" If <target> is unspecified then <name> is used for the target.")
+
+def get_gds_properties(magfile):
+    proprex = re.compile('^[ \t]*string[ \t]+(GDS_[^ \t]+)[ \t]+([^ \t]+)$')
+    proplines = []
+    if os.path.isfile(magfile):
+        with open(magfile, 'r') as ifile:
+            magtext = ifile.read().splitlines()
+            for line in magtext:
+                lmatch = proprex.match(line)
+                if lmatch:
+                    propline = lmatch.group(1) + ' ' + lmatch.group(2)
+                    proplines.append(propline)
+    return proplines
+
+# Read subcircuit ports from a CDL file, given a subcircuit name that should
+# appear in the file as a subcircuit entry, and return a dictionary of ports
+# and their indexes in the subcircuit line.
+
+def get_subckt_ports(cdlfile, subname):
+    portdict = {}
+    pidx = 1
+    portrex = re.compile(r'^\.subckt[ \t]+([^ \t]+)[ \t]+(.*)$', re.IGNORECASE)
+    with open(cdlfile, 'r') as ifile:
+        cdltext = ifile.read()
+        cdllines = cdltext.replace('\n+', ' ').splitlines()
+        for line in cdllines:
+            lmatch = portrex.match(line)
+            if lmatch:
+                if lmatch.group(1).lower() == subname.lower():
+                    ports = lmatch.group(2).split()
+                    for port in ports:
+                        portdict[port.lower()] = pidx
+                        pidx += 1
+                    break
+    return portdict
+
+# Filter a verilog file to remove any backslash continuation lines, which
+# iverilog does not parse.  If targetroot is a directory, then find and
+# process all files in the path of targetroot.  If any file to be processed
+# is unmodified (has no backslash continuation lines), then ignore it.  If
+# any file is a symbolic link and gets modified, then remove the symbolic
+# link before overwriting with the modified file.
+#
+# If 'do_remove_spec' is True, then remove timing information from the file,
+# which is everything between the keywords "specify" and "endspecify".
+
+def vfilefilter(vfile, do_remove_spec):
+    modified = False
+    with open(vfile, 'r') as ifile:
+        vtext = ifile.read()
+
+    # Remove backslash-followed-by-newline and absorb initial whitespace.  It
+    # is unclear what initial whitespace means in this context;  the use case
+    # seen so far seems to assume that leading whitespace is ignored up to
+    # the amount used by the last indentation.
+
+    vlines = re.sub(r'\\\n[ \t]*', '', vtext)
+
+    if do_remove_spec:
+        specrex = re.compile('\n[ \t]*specify[ \t\n]+')
+        endspecrex = re.compile('\n[ \t]*endspecify')
+        smatch = specrex.search(vlines)
+        while smatch:
+            specstart = smatch.start()
+            specpos = smatch.end()
+            ematch = endspecrex.search(vlines[specpos:])
+            if not ematch:
+                # Unterminated specify block;  stop rather than raise an error.
+                break
+            specend = ematch.end()
+            vtemp = vlines[0:specstart + 1] + vlines[specpos + specend + 1:]
+            vlines = vtemp
+            smatch = specrex.search(vlines)
+
+    if vlines != vtext:
+        # File contents have been modified, so if this file was a symbolic
+        # link, then remove it.  Otherwise, overwrite the file with the
+        # modified contents.
+        if os.path.islink(vfile):
+            os.unlink(vfile)
+        with open(vfile, 'w') as ofile:
+            ofile.write(vlines)
+
+# Run a filter on verilog files that cleans up known syntax issues.
+# This is embedded in the foundry_install script and is not a custom
+# filter largely because the issues are in the tool, not the PDK.
+
+def vfilter(targetroot, do_remove_spec):
+    if os.path.isfile(targetroot):
+        vfilefilter(targetroot, do_remove_spec)
+    else:
+        vlist = glob.glob(targetroot + '/*')
+        for vfile in vlist:
+            if os.path.isfile(vfile):
+                vfilefilter(vfile, do_remove_spec)
+
+# For issues that are PDK-specific, a script can be written and put in
+# the PDK's custom/scripts/ directory, and passed to the foundry_install
+# script using the "filter" option.
+
+def tfilter(targetroot, filterscript):
+    if os.path.isfile(targetroot):
+        print('   Filtering file ' + targetroot)
+        subprocess.run([filterscript, targetroot, targetroot],
+			stdin = subprocess.DEVNULL, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, universal_newlines = True)
+    else:
+        tlist = glob.glob(targetroot + '/*')
+        for tfile in tlist:
+            if os.path.isfile(tfile):
+                print('   Filtering file ' + tfile)
+                subprocess.run([filterscript, tfile, tfile],
+			stdin = subprocess.DEVNULL, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, universal_newlines = True)
+
+# This is the main entry point for the foundry install script.
+
+if __name__ == '__main__':
+
+    if len(sys.argv) == 1:
+        print("No options given to foundry_install.py.")
+        usage()
+        sys.exit(0)
+    
+    optionlist = []
+    newopt = []
+
+    sourcedir = None
+    targetdir = None
+    link_from = None
+
+    ef_names = False
+
+    have_lef = False
+    have_lefanno = False
+    have_gds = False
+    have_spice = False
+    have_cdl = False
+    ignorelist = []
+
+    do_install = True
+
+    # Break arguments into groups where the first word begins with "-".
+    # All following words not beginning with "-" are appended to the
+    # same list (optionlist).  Then each optionlist is processed.
+    # Note that the first entry in optionlist has the '-' removed.
+
+    for option in sys.argv[1:]:
+        if option.startswith('-'):
+            if newopt != []:
+                optionlist.append(newopt)
+                newopt = []
+            newopt.append(option[1:])
+        else:
+            newopt.append(option)
+
+    if newopt != []:
+        optionlist.append(newopt)
+
+    # Pull library names from optionlist
+    libraries = []
+    for option in optionlist[:]:
+        if option[0] == 'library':
+            optionlist.remove(option)
+            libraries.append(option[1:]) 
+
+    # Check for options "ef_names"/"std_names" and "uninstall"
+    for option in optionlist[:]:
+        if option[0] == 'ef_naming' or option[0] == 'ef_names':
+            optionlist.remove(option)
+            ef_names = True
+        elif option[0] == 'std_naming' or option[0] == 'std_names':
+            optionlist.remove(option)
+            ef_names = False
+        elif option[0] == 'uninstall':
+            optionlist.remove(option)
+            do_install = False
+
+    # Check for options "link_from", "source", and "target"
+    link_name = None
+    for option in optionlist[:]:
+        if option[0] == 'link_from':
+            optionlist.remove(option)
+            if option[1].lower() == 'none':
+                link_from = None
+            elif option[1].lower() == 'source':
+                link_from = 'source'
+            else:
+                link_from = option[1]
+                link_name = os.path.split(link_from)[1]
+        elif option[0] == 'source':
+            optionlist.remove(option)
+            sourcedir = option[1]
+        elif option[0] == 'target':
+            optionlist.remove(option)
+            targetdir = option[1]
+
+    # Error if no source or dest specified
+    if not sourcedir:
+        print("No source directory specified.  Exiting.")
+        sys.exit(1)
+
+    if not targetdir:
+        print("No target directory specified.  Exiting.")
+        sys.exit(1)
+
+    # If the link source is a PDK name with no path, then pull the
+    # path from the target name.  link_name keeps the bare PDK name
+    # set above, so that checkdir (below) resolves correctly.
+
+    if link_from:
+        if link_from != 'source':
+            if '/' not in link_from:
+                target_root = os.path.split(targetdir)[0]
+                link_from = target_root + '/' + link_from
+        else:
+            # If linking from source, convert the source path to an
+            # absolute pathname.
+            sourcedir = os.path.abspath(sourcedir)
+
+    # Take the target PDK name from the target path last component
+    pdkname = os.path.split(targetdir)[1]
+
+    # checkdir is the DIST target directory for the PDK pointed
+    # to by link_name.  Files must be found there before creating
+    # symbolic links to the (not yet existing) final install location.
+
+    if link_name:
+        checkdir = os.path.split(targetdir)[0] + '/' + link_name
+    else:
+        checkdir = ''
+
+    # Diagnostic
+    if do_install:
+        print("Installing in target directory " + targetdir)
+
+    # Create the top-level directories
+
+    os.makedirs(targetdir, exist_ok=True)
+    os.makedirs(targetdir + '/libs.ref', exist_ok=True)
+    os.makedirs(targetdir + '/libs.tech', exist_ok=True)
+
+    # Path to magic techfile depends on ef_names
+
+    if ef_names:
+        mag_current = '/libs.tech/magic/current/'
+    else:
+        mag_current = '/libs.tech/magic/'
+
+    # Populate the techLEF and SPICE models, if specified.
+
+    for option in optionlist[:]:
+        if option[0] == 'techlef':
+            filter_script = None
+            for item in option:
+                if item.split('=')[0] == 'filter':
+                    filter_script = item.split('=')[1]
+                    break
+
+            if ef_names:
+                techlefdir = targetdir + '/libs.ref/techLEF'
+                checklefdir = checkdir + '/libs.ref/techLEF'
+                if link_from:
+                    linklefdir = link_from + '/libs.ref/techLEF'
+                else:
+                    linklefdir = ''
+            else:
+                techlefdir = targetdir + '/libs.tech/lef'
+                checklefdir = checkdir + '/libs.tech/lef'
+                if link_from:
+                    linklefdir = link_from + '/libs.tech/lef'
+                else:
+                    linklefdir = ''
+            os.makedirs(techlefdir, exist_ok=True)
+            # All techlef files should be linked or copied, so use "glob"
+            # on the wildcards
+            techlist = glob.glob(sourcedir + '/' + option[1])
+
+            for lefname in techlist:
+                leffile = os.path.split(lefname)[1]
+                targname = techlefdir + '/' + leffile
+                checklefname = checklefdir + '/' + leffile
+                linklefname = linklefdir + '/' + leffile
+                # Remove any existing file(s)
+                if os.path.isfile(targname):
+                    os.remove(targname)
+                elif os.path.islink(targname):
+                    os.unlink(targname)
+                elif os.path.isdir(targname):
+                    shutil.rmtree(targname)
+
+                if do_install:
+                    if not link_from:
+                        if os.path.isfile(lefname):
+                            shutil.copy(lefname, targname)
+                        else:
+                            shutil.copytree(lefname, targname)
+                    elif link_from == 'source':
+                        os.symlink(lefname, targname)
+                    else:
+                        if os.path.exists(checklefname):
+                            os.symlink(linklefname, targname)
+                        elif os.path.isfile(lefname):
+                            shutil.copy(lefname, targname)
+                        else:
+                            shutil.copytree(lefname, targname)
+
+                    if filter_script:
+                        # Apply filter script to all files in the target directory
+                        tfilter(targname, filter_script)
+            optionlist.remove(option)
+
+        elif option[0] == 'models':
+            filter_script = None
+            for item in option:
+                if item.split('=')[0] == 'filter':
+                    filter_script = item.split('=')[1]
+                    break
+
+            print('Diagnostic:  installing models.')
+            modelsdir = targetdir + '/libs.tech/models'
+            checkmoddir = checkdir + '/libs.tech/models'
+            if link_from:
+                linkmoddir = link_from + '/libs.tech/models'
+            else:
+                linkmoddir = ''
+
+            os.makedirs(modelsdir, exist_ok=True)
+
+            # All model files should be linked or copied, so use "glob"
+            # on the wildcards.  Copy each file and recursively copy each
+            # directory.
+            modellist = glob.glob(sourcedir + '/' + option[1])
+
+            for modname in modellist:
+                modfile = os.path.split(modname)[1]
+                targname = modelsdir + '/' + modfile
+                checkmodname = checkmoddir + '/' + modfile
+                linkmodname = linkmoddir + '/' + modfile
+
+                if os.path.isdir(modname):
+                    # Remove any existing directory, and its contents
+                    if os.path.isdir(targname):
+                        shutil.rmtree(targname)
+                    os.makedirs(targname)
+
+                    # Recursively find and copy or link the whole directory
+                    # tree from this point.
+
+                    allmodlist = glob.glob(modname + '/**', recursive=True)
+                    commonpart = os.path.commonpath(allmodlist)
+                    for submodname in allmodlist:
+                        if os.path.isdir(submodname):
+                            continue
+                        # Get the path part that is not common between modlist and
+                        # allmodlist.
+                        subpart = os.path.relpath(submodname, commonpart)
+                        subtargname = targname + '/' + subpart
+                        os.makedirs(os.path.split(subtargname)[0], exist_ok=True)
+                        if do_install:
+                            if not link_from:
+                                if os.path.isfile(submodname):
+                                    shutil.copy(submodname, subtargname)
+                                else:
+                                    shutil.copytree(submodname, subtargname)
+                            elif link_from == 'source':
+                                os.symlink(submodname, subtargname)
+                            else:
+                                if os.path.exists(checkmodname):
+                                    os.symlink(linkmodname, subtargname)
+                                elif os.path.isfile(submodname):
+                                    shutil.copy(submodname, subtargname)
+                                else:
+                                    shutil.copytree(submodname, subtargname)
+
+                    if do_install and filter_script:
+                        # Apply filter script once to all files in the target directory
+                        tfilter(targname, filter_script)
+
+                else:
+                    # Remove any existing file
+                    if os.path.isfile(targname):
+                        os.remove(targname)
+                    elif os.path.islink(targname):
+                        os.unlink(targname)
+                    elif os.path.isdir(targname):
+                        shutil.rmtree(targname)
+
+                    if do_install:
+                        if not link_from:
+                            if os.path.isfile(modname):
+                                shutil.copy(modname, targname)
+                            else:
+                                shutil.copytree(modname, targname)
+                        elif link_from == 'source':
+                            os.symlink(modname, targname)
+                        else:
+                            if os.path.exists(checkmodname):
+                                os.symlink(linkmodname, targname)
+                            elif os.path.isfile(modname):
+                                shutil.copy(modname, targname)
+                            else:
+                                shutil.copytree(modname, targname)
+
+                        if filter_script:
+                            # Apply filter script to all files in the target directory
+                            tfilter(targname, filter_script)
+
+            optionlist.remove(option)
+
+    # The remaining options in optionlist should all be types like 'lef' or 'liberty'
+    for option in optionlist[:]:
+        # Diagnostic
+        if do_install:
+            print("Installing option: " + str(option[0]))
+        destdir = targetdir + '/libs.ref/' + option[0]
+        checklibdir = checkdir + '/libs.ref/' + option[0]
+        if link_from:
+            destlinkdir = link_from + '/libs.ref/' + option[0]
+        else:
+            destlinkdir = ''
+        os.makedirs(destdir, exist_ok=True)
+
+        # If the option is followed by the keyword "up" and a number, then
+        # the source should be copied (or linked) from <number> levels up
+        # in the hierarchy (see below).
+
+        if 'up' in option:
+            uparg = option.index('up')
+            try:
+                hier_up = int(option[uparg + 1])
+            except ValueError:
+                print("Non-numeric option to 'up': " + option[uparg + 1])
+                print("Ignoring 'up' option.")
+                hier_up = 0
+        else:
+            hier_up = 0
+
+        filter_script = None
+        for item in option:
+            if item.split('=')[0] == 'filter':
+                filter_script = item.split('=')[1]
+                break
+
+        # Option 'compile' is a standalone keyword ('comp' may be used).
+        do_compile = 'compile' in option or 'comp' in option
+ 
+        # Option 'nospecify' is a standalone keyword ('nospec' may be used).
+        do_remove_spec = 'nospecify' in option or 'nospec' in option
+
+        # Check off things we need to do for migration to magic database and
+        # abstract files.
+        if option[0] == 'lef':
+            have_lef = True
+        elif option[0] == 'gds':
+            have_gds = True
+        elif option[0] == 'lefanno':
+            have_lefanno = True
+        elif option[0] == 'spice':
+            have_spice = True
+        elif option[0] == 'cdl':
+            have_cdl = True
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+            destlibdir = destdir + '/' + destlib
+            destlinklibdir = destlinkdir + '/' + destlib
+            checksrclibdir = checklibdir + '/' + destlib
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Populate the library subdirectory
+            # Parse the option and replace each '/*/' with the library name,
+            # and check if it is a valid directory name.  Then glob the
+            # resulting option name.  Warning:  This assumes that all
+            # occurrences of the text '/*/' match a library name.  It should
+            # be possible to wild-card the directory name in such a way that
+            # this is always true.
+
+            testopt = re.sub(r'/\*/', '/' + library[1] + '/', option[1])
+
+            liblist = glob.glob(sourcedir + '/' + testopt)
+
+            # Diagnostic
+            print('Collecting files from ' + str(sourcedir + '/' + testopt))
+            print('Files to install:')
+            if len(liblist) < 10:
+                for item in liblist:
+                    print('   ' + item)
+            else:
+                for item in liblist[0:5]:
+                    print('   ' + item)
+                print('   .')
+                print('   .')
+                print('   .')
+                for item in liblist[-5:]:
+                    print('   ' + item)
+                print('(' + str(len(liblist)) + ' files total)')
+
+            for libname in liblist:
+                # Note that there may be a hierarchy to the files in option[1],
+                # say for liberty timing files under different conditions, so
+                # make sure directories have been created as needed.
+
+                libfile = os.path.split(libname)[1]
+                libfilepath = os.path.split(libname)[0]
+                destpathcomp = []
+                for i in range(hier_up):
+                    destpathcomp.append('/' + os.path.split(libfilepath)[1])
+                    libfilepath = os.path.split(libfilepath)[0]
+                destpathcomp.reverse()
+                destpath = ''.join(destpathcomp)
+
+                targname = destlibdir + destpath + '/' + libfile
+
+                # NOTE:  When using "up" with link_from, could just make
+                # destpath itself a symbolic link;  this way is more flexible
+                # but adds one symbolic link per file.
+
+                if destpath != '':
+                    if not os.path.isdir(destlibdir + destpath):
+                        os.makedirs(destlibdir + destpath, exist_ok=True)
+
+                # Both linklibname and checklibname need to contain any hierarchy
+                # implied by the "up" option.
+
+                linklibname = destlinklibdir + destpath + '/' + libfile
+                checklibname = checksrclibdir + destpath + '/' + libfile
+
+                # Remove any existing file
+                if os.path.isfile(targname):
+                    os.remove(targname)
+                elif os.path.islink(targname):
+                    os.unlink(targname)
+                elif os.path.isdir(targname):
+                    shutil.rmtree(targname)
+
+                if do_install:
+                    if not link_from:
+                        if os.path.isfile(libname):
+                            shutil.copy(libname, targname)
+                        else:
+                            shutil.copytree(libname, targname)
+                    elif link_from == 'source':
+                        os.symlink(libname, targname)
+                    else:
+                        if os.path.exists(checklibname):
+                            os.symlink(linklibname, targname)
+                        elif os.path.isfile(libname):
+                            shutil.copy(libname, targname)
+                        else:
+                            shutil.copytree(libname, targname)
+
+                    if option[0] == 'verilog':
+                        # Special handling of verilog files to make them
+                        # syntactically acceptable to iverilog.
+                        # NOTE:  Perhaps this should be recast as a custom filter?
+                        vfilter(targname, do_remove_spec)
+
+                    if filter_script:
+                        # Apply filter script to all files in the target directory
+                        tfilter(targname, filter_script)
+
+            if do_compile:
+                # To do:  Extend this option to include formats other than verilog.
+                # Also to do:  Make this compatible with linking from another PDK.
+
+                if option[0] == 'verilog':
+                    # If there is not a single file with all verilog cells in it,
+                    # then compile one, so that a design does not need a separate
+                    # include line for every cell it uses.
+
+                    alllibname = destlibdir + '/' + destlib + '.v'
+
+                    print('Diagnostic:  Creating consolidated verilog library ' + destlib + '.v')
+                    vlist = glob.glob(destlibdir + '/*.v')
+                    if alllibname in vlist:
+                        vlist.remove(alllibname)
+
+                    if len(vlist) > 1:
+                        print('New file is:  ' + alllibname)
+                        with open(alllibname, 'w') as ofile:
+                            for vfile in vlist:
+                                with open(vfile, 'r') as ifile:
+                                    # print('Adding ' + vfile + ' to library.')
+                                    vtext = ifile.read()
+                                    # NOTE:  The following workaround resolves an
+                                    # issue with iverilog, which does not properly
+                                    # parse specify timing paths that are not in
+                                    # parentheses.  Easy to work around
+                                    vlines = re.sub(r'\)[ \t]*=[ \t]*([01]:[01]:[01])[ \t]*;', r') = ( \1 ) ;', vtext)
+                                    print(vlines, file=ofile)
+                                print('\n//--------EOF---------\n', file=ofile)
+                    else:
+                        print('Only one file (' + str(vlist) + ');  ignoring "compile" option.')
+
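The iverilog workaround above can be exercised in isolation; the substitution simply wraps the rise:fall:turn-off delay triplet of a specify path in parentheses:

```python
import re

def fix_specify(vtext):
    # Same substitution used when concatenating the library: iverilog
    # rejects specify path delays whose 0:0:0 triplet is unparenthesized.
    return re.sub(r'\)[ \t]*=[ \t]*([01]:[01]:[01])[ \t]*;',
                  r') = ( \1 ) ;', vtext)

print(fix_specify('(A => X) = 0:1:1;'))  # → (A => X) = ( 0:1:1 ) ;
```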
+    print("Completed installation of vendor files.")
+
+    # Now for the harder part.  If GDS and/or LEF databases were specified,
+    # then migrate them to magic (.mag files in layout/ or abstract/).
+
+    ignorelist = []
+    do_cdl_scaleu = False
+    for option in optionlist[:]:
+        if option[0] == 'cdl':
+            # Option 'scaleu' is a standalone keyword
+            do_cdl_scaleu = 'scaleu' in option
+
+            # Option 'ignore' has arguments after '='
+            for item in option:
+                if item.split('=')[0] == 'ignore':
+                    ignorelist = item.split('=')[1].split(',')
+ 
+    devlist = []
+    fixedlist = []
+    pdklibrary = None
+
+    if have_gds:
+        print("Migrating GDS files to layout.")
+        destdir = targetdir + '/libs.ref/mag'
+        srcdir = targetdir + '/libs.ref/gds'
+        os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+            destlibdir = destdir + '/' + destlib
+            srclibdir = srcdir + '/' + destlib
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # For primitive devices, check the PDK script and find the name
+            # of the library and get a list of supported devices.
+
+            if library[0] == 'primitive':
+                pdkscript = targetdir + mag_current + pdkname + '.tcl'
+                print('Searching for supported devices in PDK script ' + pdkscript + '.')
+
+                if os.path.isfile(pdkscript):
+                    librex = re.compile(r'^[ \t]*set[ \t]+PDKNAMESPACE[ \t]+([^ \t]+)$')
+                    devrex = re.compile(r'^[ \t]*proc[ \t]+([^ :\t]+)::([^ \t_]+)_defaults')
+                    fixrex = re.compile(r'^[ \t]*return[ \t]+\[([^ :\t]+)::fixed_draw[ \t]+([^ \t]+)[ \t]+')
+                    devlist = []
+                    fixedlist = []
+                    with open(pdkscript, 'r') as ifile:
+                        scripttext = ifile.read().splitlines()
+                        for line in scripttext:
+                            lmatch = librex.match(line)
+                            if lmatch:
+                                pdklibrary = lmatch.group(1)
+                            dmatch = devrex.match(line)
+                            if dmatch:
+                                if dmatch.group(1) == pdklibrary:
+                                    devlist.append(dmatch.group(2))
+                            fmatch = fixrex.match(line)
+                            if fmatch:
+                                if fmatch.group(1) == pdklibrary:
+                                    fixedlist.append(fmatch.group(2))
+
+                # Diagnostic
+                print("PDK library is " + str(pdklibrary))
+
+            # Link to the PDK magic startup file from the target directory
+            # If there is no -F version then look for one without -F (open source PDK)
+            startup_script = targetdir + mag_current + pdkname + '-F.magicrc'
+            if not os.path.isfile(startup_script):
+                startup_script = targetdir + mag_current + pdkname + '.magicrc'
+
+            if os.path.isfile(startup_script):
+                # If the symbolic link exists, remove it.
+                if os.path.isfile(destlibdir + '/.magicrc'):
+                    os.remove(destlibdir + '/.magicrc')
+                os.symlink(startup_script, destlibdir + '/.magicrc')
+ 
+                # Find GDS file names in the source
+                print('Getting GDS file list from ' + srclibdir + '.')
+                gdsfiles = os.listdir(srclibdir)
+
+                # Generate a script called "generate_magic.tcl" and leave it in
+                # the target directory.  Use it as input to magic to create the
+                # .mag files from the database.
+
+                print('Creating magic generation script.') 
+                with open(destlibdir + '/generate_magic.tcl', 'w') as ofile:
+                    print('#!/usr/bin/env wish', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('# Script to generate .mag files from .gds    ', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('gds readonly true', file=ofile)
+                    print('gds flatten true', file=ofile)
+                    # print('gds rescale false', file=ofile)
+                    print('tech unlock *', file=ofile)
+
+                    for gdsfile in gdsfiles:
+                        # Note:  DO NOT use a relative path here.
+                        # print('gds read ../../gds/' + destlib + '/' + gdsfile, file=ofile)
+                        print('gds read ' + srclibdir + '/' + gdsfile, file=ofile)
+
+                        # Make sure properties include the Tcl generated cell
+                        # information from the PDK script
+
+                        if pdklibrary:
+                            tclfixedlist = '{' + ' '.join(fixedlist) + '}'
+                            print('set devlist ' + tclfixedlist, file=ofile)
+                            print('set topcell [lindex [cellname list top] 0]',
+				    file=ofile)
+
+                            print('foreach cellname $devlist {', file=ofile)
+                            print('    load $cellname', file=ofile)
+                            print('    property gencell $cellname', file=ofile)
+                            print('    property parameter m=1', file=ofile)
+                            print('    property library ' + pdklibrary, file=ofile)
+                            print('}', file=ofile)
+                            print('load $topcell', file=ofile)
+
+                    print('writeall force', file=ofile)
+
+                    if have_lefanno:
+                        # Find LEF file names in the source
+                        lefsrcdir = targetdir + '/libs.ref/lefanno'
+                        lefsrclibdir = lefsrcdir + '/' + destlib
+                        leffiles = list(item for item in os.listdir(lefsrclibdir) if os.path.splitext(item)[1] == '.lef')
+
+                    if not have_lef:
+                        # This library has a GDS database but no LEF database.  Use
+                        # magic to create abstract views of the GDS cells.  If
+                        # option "-lefanno" is given, then read the LEF file after
+                        # loading the database file to annotate the cell with
+                        # information from the LEF file.  This usually indicates
+                        # that the LEF file has some weird definition of obstruction
+                        # layers and we want to normalize them by using magic's LEF
+                        # write procedure, but we still need the pin use and class
+                        # information from the LEF file, and maybe the bounding box.
+
+                        print('set maglist [glob *.mag]', file=ofile)
+                        print('foreach name $maglist {', file=ofile)
+                        print('   load [file root $name]', file=ofile)
+                        if have_lefanno:
+                            print('}', file=ofile)
+                            for leffile in leffiles:
+                                print('lef read ' + lefsrclibdir + '/' + leffile, file=ofile)
+                            print('foreach name $maglist {', file=ofile)
+                            print('   load [file root $name]', file=ofile)
+                        print('   lef write [file root $name]', file=ofile)
+                        print('}', file=ofile)
+                    print('quit -noprompt', file=ofile)
+
+                # Run magic to read in the GDS file and write out magic databases.
+                with open(destlibdir + '/generate_magic.tcl', 'r') as ifile:
+                    subprocess.run(['magic', '-dnull', '-noconsole'],
+				stdin = ifile, stdout = subprocess.PIPE,
+				stderr = subprocess.PIPE, cwd = destlibdir,
+				universal_newlines = True)
+
+                if not have_lef:
+                    # Remove the lefanno/ target and its contents.
+                    if have_lefanno:
+                        lefannosrcdir = targetdir + '/libs.ref/lefanno'
+                        if os.path.isdir(lefannosrcdir):
+                            shutil.rmtree(lefannosrcdir)
+
+                    destlefdir = targetdir + '/libs.ref/lef'
+                    destleflibdir = destlefdir + '/' + destlib
+                    os.makedirs(destleflibdir, exist_ok=True)
+                    leflist = list(item for item in os.listdir(destlibdir) if os.path.splitext(item)[1] == '.lef')
+
+                    # All macros will go into one file
+                    destleflib = destleflibdir + '/' + destlib + '.lef'
+                    # Remove any existing library file from the target directory
+                    if os.path.isfile(destleflib):
+                        os.remove(destleflib)
+
+                    first = True
+                    with open(destleflib, 'w') as ofile:
+                        for leffile in leflist:
+                            # Remove any existing single file from the target directory
+                            if os.path.isfile(destleflibdir + '/' + leffile):
+                                os.remove(destleflibdir + '/' + leffile)
+
+                            # Append contents
+                            sourcelef =  destlibdir + '/' + leffile
+                            with open(sourcelef, 'r') as ifile:
+                                leflines = ifile.read().splitlines()
+                                if not first:
+                                    # Remove header from all but the first file
+                                    leflines = leflines[8:]
+                                else:
+                                    first = False
+
+                            for line in leflines:
+                                print(line, file=ofile)
+
+                            # Remove file from the source directory
+                            os.remove(sourcelef)
+
+                    have_lef = True
+
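The header-stripping concatenation above amounts to the routine below; the eight-line header length is the value the install script assumes magic writes at the top of every LEF file it generates:

```python
def concat_lef(file_texts, header_lines=8):
    # Keep the header of the first file only; drop the assumed
    # eight-line preamble from every subsequent file.
    out = []
    for i, text in enumerate(file_texts):
        lines = text.splitlines()
        out.extend(lines if i == 0 else lines[header_lines:])
    return '\n'.join(out)

a = '\n'.join(['h%d' % n for n in range(8)] + ['MACRO A', 'END A'])
b = '\n'.join(['h%d' % n for n in range(8)] + ['MACRO B', 'END B'])
print(concat_lef([a, b]))
```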
+                # Remove the startup script and generation script
+                os.remove(destlibdir + '/.magicrc')
+                os.remove(destlibdir + '/generate_magic.tcl')
+            else:
+                print("Master PDK magic startup file not found.  Did you install")
+                print("PDK tech files before PDK vendor files?")
+
+    if have_lef:
+        print("Migrating LEF files to layout.")
+        destdir = targetdir + '/libs.ref/maglef'
+        srcdir = targetdir + '/libs.ref/lef'
+        magdir = targetdir + '/libs.ref/mag'
+        cdldir = targetdir + '/libs.ref/cdl'
+        os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+            destlibdir = destdir + '/' + destlib
+            srclibdir = srcdir + '/' + destlib
+            maglibdir = magdir + '/' + destlib
+            cdllibdir = cdldir + '/' + destlib
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Link to the PDK magic startup file from the target directory
+            startup_script = targetdir + mag_current + pdkname + '-F.magicrc'
+            if not os.path.isfile(startup_script):
+                startup_script = targetdir + mag_current + pdkname + '.magicrc'
+
+            if os.path.isfile(startup_script):
+                # If the symbolic link exists, remove it.
+                if os.path.isfile(destlibdir + '/.magicrc'):
+                    os.remove(destlibdir + '/.magicrc')
+                os.symlink(startup_script, destlibdir + '/.magicrc')
+ 
+                # Find LEF file names in the source
+                leffiles = os.listdir(srclibdir)
+
+                # Generate a script called "generate_magic.tcl" and leave it in
+                # the target directory.  Use it as input to magic to create the
+                # .mag files from the database.
+
+                with open(destlibdir + '/generate_magic.tcl', 'w') as ofile:
+                    print('#!/usr/bin/env wish', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('# Script to generate .mag files from .lef    ', file=ofile)
+                    print('#--------------------------------------------', file=ofile)
+                    print('tech unlock *', file=ofile)
+
+                    if pdklibrary:
+                        tcldevlist = '{' + ' '.join(devlist) + '}'
+                        print('set devlist ' + tcldevlist, file=ofile)
+
+                    for leffile in leffiles:
+
+                        # Okay to use a relative path here.
+                        # print('lef read ' + srclibdir + '/' + leffile', file=ofile)
+                        print('lef read ../../lef/' + destlib + '/' + leffile, file=ofile)
+
+                        # To be completed:  Parse SPICE file for port order, make
+                        # sure ports are present and ordered.
+
+                        if pdklibrary:
+                            print('set cellname [file root ' + leffile + ']', file=ofile)
+                            print('if {[lsearch $devlist $cellname] >= 0} {',
+					file=ofile)
+                            print('    load $cellname', file=ofile)
+                            print('    property gencell $cellname', file=ofile)
+                            print('    property parameter m=1', file=ofile)
+                            print('    property library ' + pdklibrary, file=ofile)
+                            print('}', file=ofile)
+
+                    print('writeall force', file=ofile)
+                    print('quit -noprompt', file=ofile)
+
+                # Run magic to read in the LEF file and write out magic databases.
+                with open(destlibdir + '/generate_magic.tcl', 'r') as ifile:
+                    subprocess.run(['magic', '-dnull', '-noconsole'],
+				stdin = ifile, stdout = subprocess.PIPE,
+				stderr = subprocess.PIPE, cwd = destlibdir,
+				universal_newlines = True)
+
+                # Now list all the .mag files generated, and for each, read the
+                # corresponding file from the mag/ directory, pull the GDS file
+                # properties, and add those properties to the maglef view.  Also
+                # read the CDL (or SPICE) netlist, read the ports, and rewrite
+                # the port order in the mag and maglef file accordingly.
+
+                # Diagnostic
+                print('Annotating files in ' + destlibdir)
+                magfiles = os.listdir(destlibdir)
+                for magroot in magfiles:
+                    magname = os.path.splitext(magroot)[0]
+                    magfile = maglibdir + '/' + magroot
+                    magleffile = destlibdir + '/' + magroot
+                    prop_lines = get_gds_properties(magfile)
+
+                    # Make sure properties include the Tcl generated cell
+                    # information from the PDK script
+
+                    if pdklibrary:
+                        if magname in fixedlist:
+                            prop_lines.append('gencell ' + magname)
+                            prop_lines.append('library ' + pdklibrary)
+                            prop_lines.append('parameter m=1')
+
+                    cdlfile = cdllibdir + '/' + magname + '.cdl'
+                    if not os.path.exists(cdlfile):
+                        # Assume there is one file with all cell subcircuits in it.
+                        try:
+                            cdlfile = glob.glob(cdllibdir + '/*.cdl')[0]
+                        except IndexError:
+                            print('No CDL file for ' + destlib + ' device ' + magname)
+                            cdlfile = None
+                            # To be done:  If destlib is 'primitive', then look in
+                            # SPICE models for port order.
+                            if destlib == 'primitive':
+                                print('Fix me:  Need to look in SPICE models!')
+                    if cdlfile:
+                        port_dict = get_subckt_ports(cdlfile, magname)
+                    else:
+                        port_dict = {}
+
+                    proprex = re.compile('<< properties >>')
+                    endrex = re.compile('<< end >>')
+                    rlabrex = re.compile('rlabel[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+([^ \t]+)')
+                    flabrex = re.compile('flabel[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+[^ \t]+[ \t]+([^ \t]+)')
+                    portrex = re.compile(r'port[ \t]+([^ \t]+)[ \t]+(.*)')
+                    portnum = -1
+
+                    with open(magleffile, 'r') as ifile:
+                        magtext = ifile.read().splitlines()
+
+                    with open(magleffile, 'w') as ofile:
+                        for line in magtext:
+                            tmatch = portrex.match(line)
+                            if tmatch:
+                                if portnum >= 0:
+                                    line = 'port ' + str(portnum) + ' ' + tmatch.group(2)
+                                else:
+                                    line = 'port ' + tmatch.group(1) + ' ' + tmatch.group(2)
+                            ematch = endrex.match(line)
+                            if ematch and len(prop_lines) > 0:
+                                print('<< properties >>', file=ofile)
+                                for prop in prop_lines:
+                                    print('string ' + prop, file=ofile)
+
+                            print(line, file=ofile)
+                            pmatch = proprex.match(line)
+                            if pmatch:
+                                for prop in prop_lines:
+                                    print('string ' + prop, file=ofile)
+                                prop_lines = []
+
+                            lmatch = flabrex.match(line)
+                            if not lmatch:
+                                lmatch = rlabrex.match(line)
+                            if lmatch:
+                                labname = lmatch.group(1).lower()
+                                try:
+                                    portnum = port_dict[labname]
+                                except KeyError:
+                                    portnum = -1
+
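The port-line rewrite above can be seen in miniature. This sketch captures the port index as a whole token (one group spanning the full index) and substitutes the position found in the CDL subcircuit when one exists:

```python
import re

portrex = re.compile(r'port[ \t]+([^ \t]+)[ \t]+(.*)')

def renumber(line, portnum):
    # Replace the port index with the CDL subcircuit position when one
    # was found (portnum >= 0); otherwise keep the index in the file.
    tmatch = portrex.match(line)
    if not tmatch:
        return line
    if portnum >= 0:
        return 'port ' + str(portnum) + ' ' + tmatch.group(2)
    return 'port ' + tmatch.group(1) + ' ' + tmatch.group(2)

print(renumber('port 12 nsew signal input', 3))   # → port 3 nsew signal input
print(renumber('port 12 nsew signal input', -1))  # → port 12 nsew signal input
```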
+                    if os.path.exists(magfile):
+                        with open(magfile, 'r') as ifile:
+                            magtext = ifile.read().splitlines()
+
+                        with open(magfile, 'w') as ofile:
+                            for line in magtext:
+                                tmatch = portrex.match(line)
+                                if tmatch:
+                                    if portnum >= 0:
+                                        line = 'port ' + str(portnum) + ' ' + tmatch.group(2)
+                                    else:
+                                        line = 'port ' + tmatch.group(1) + ' ' + tmatch.group(2)
+                                print(line, file=ofile)
+                                lmatch = flabrex.match(line)
+                                if not lmatch:
+                                    lmatch = rlabrex.match(line)
+                                if lmatch:
+                                    labname = lmatch.group(1).lower()
+                                    try:
+                                        portnum = port_dict[labname]
+                                    except KeyError:
+                                        portnum = -1
+                    elif os.path.splitext(magfile)[1] == '.mag':
+                        # NOTE:  Probably this means the GDS cell has a different name.
+                        print('Error: No file ' + magfile + ';  the GDS cell name may differ from the LEF macro name.')
+
+                # Remove the startup script and generation script
+                os.remove(destlibdir + '/.magicrc')
+                os.remove(destlibdir + '/generate_magic.tcl')
+            else:
+                print("Master PDK magic startup file not found.  Did you install")
+                print("PDK tech files before PDK vendor files?")
+
+    # If SPICE or CDL databases were specified, then convert them to
+    # a form that can be used by ngspice, using the cdl2spi.py script.
+
+    if have_spice:
+        if not os.path.isdir(targetdir + '/libs.ref/spi'):
+            os.makedirs(targetdir + '/libs.ref/spi', exist_ok=True)
+
+    elif have_cdl:
+        if not os.path.isdir(targetdir + '/libs.ref/spi'):
+            os.makedirs(targetdir + '/libs.ref/spi', exist_ok=True)
+
+        print("Migrating CDL netlists to SPICE.")
+        destdir = targetdir + '/libs.ref/spi'
+        srcdir = targetdir + '/libs.ref/cdl'
+        os.makedirs(destdir, exist_ok=True)
+
+        # For each library, create the library subdirectory
+        for library in libraries:
+            if len(library) == 3:
+                destlib = library[2]
+            else:
+                destlib = library[1]
+            destlibdir = destdir + '/' + destlib
+            srclibdir = srcdir + '/' + destlib
+            os.makedirs(destlibdir, exist_ok=True)
+
+            # Find CDL file names in the source
+            cdlfiles = os.listdir(srclibdir)
+
+            # The directory with scripts should be in ../common with respect
+            # to the Makefile that determines the cwd.
+            scriptdir = os.path.split(os.getcwd())[0] + '/common/'
+
+            # Run cdl2spi.py script to read in the CDL file and write out SPICE
+            for cdlfile in cdlfiles:
+                spiname = os.path.splitext(cdlfile)[0] + '.spi'
+                procopts = [scriptdir + 'cdl2spi.py', srclibdir + '/' + cdlfile, destlibdir + '/' + spiname]
+                if do_cdl_scaleu:
+                    procopts.append('-dscale=u')
+                for item in ignorelist:
+                    procopts.append('-ignore=' + item)
+                print('Running (in ' + destlibdir + '): ' + ' '.join(procopts))
+                subprocess.run(procopts,
+			stdin = subprocess.DEVNULL, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, cwd = destlibdir,
+			universal_newlines = True)
+
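The option assembly above can be factored into a small helper; the flag spellings (`-dscale=u`, `-ignore=`) are those the install script passes to cdl2spi.py, and the paths here are hypothetical:

```python
def build_cdl2spi_cmd(scriptdir, srcfile, destfile, scaleu=False, ignorelist=()):
    # Mirror the per-file command construction: script path, input CDL,
    # output SPICE, then optional scaling and ignore flags.
    cmd = [scriptdir + 'cdl2spi.py', srcfile, destfile]
    if scaleu:
        cmd.append('-dscale=u')
    for item in ignorelist:
        cmd.append('-ignore=' + item)
    return cmd

print(build_cdl2spi_cmd('/pdk/common/', 'a.cdl', 'a.spi', True, ['CAP']))
```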
+    sys.exit(0)
diff --git a/common/padframe_generator.py b/common/padframe_generator.py
new file mode 120000
index 0000000..c87a0c5
--- /dev/null
+++ b/common/padframe_generator.py
@@ -0,0 +1 @@
+soc_floorplanner.py
\ No newline at end of file
diff --git a/common/pdk.bindkeys b/common/pdk.bindkeys
new file mode 100644
index 0000000..1b502db
--- /dev/null
+++ b/common/pdk.bindkeys
@@ -0,0 +1,186 @@
+#
+# Cadence-compatibility bindings except where marked.
+#
+macro f  "view"	     ;# zoom to fit window
+macro ^z "zoom 0.5"  ;# zoom in
+macro Z  "zoom 2"    ;# zoom out
+macro B  "popstack"  ;# up hierarchy
+macro X  {pushstack [cellname list self]}	;# down hierarchy
+macro x  "edit"		;# down hierarchy, edit-in-place
+macro b  "select top cell ; edit"  ;# up hierarchy from edit-in-place
+macro p  "tool wire; magic::trackwire %W pick"	;# path
+macro ^r "redraw"
+macro ^f "unexpand"
+macro F  "expand"
+macro ^a "select area"
+macro ^d "select clear"
+macro k  "magic::measure"
+macro K  "magic::unmeasure"
+macro i  "magic::promptload getcell"
+macro l  "magic::update_texthelper ; wm deiconify .texthelper ; raise .texthelper"
+macro O  "magic::clock"
+macro <del> "magic::delete"
+
+# Toolkit parameter dialog
+macro q "magic::gencell {} ; raise .params"
+#
+# The following should be already implemented as existing Magic bindings
+#
+macro u "undo"
+macro U "redo"
+macro m "move"
+macro c "copy"
+#
+# Compatibility with Electric;  Cadence bindings are on function keys and
+# do not work through the VNC.
+macro ^s "magic::promptsave magic"	;# save dialog menu
+
+#
+# Bertrand's bindings follow except where marked.
+#
+macro < sideways
+macro ^ upsidedown
+#
+# Set grid at 1 lambda
+#
+macro 0  "grid on ; grid 1l"	;# Grid at 0.5um (1 lambda)
+# macro ^f "feedback clear"	;# conflicts with Cadence binding
+#
+# Paint/Erase macros 
+#
+macro 1 "paint m1"
+macro ! "erase m1"
+macro 2 "paint m2"
+macro @ "erase m2"
+macro 3 "paint m3"
+macro # "erase m3"
+#ifdef METAL4
+macro 4 "paint mtp"
+macro $ "erase mtp"
+#endif
+#ifdef METAL5
+macro 4 "paint m4"
+macro $ "erase m4"
+macro 5 "paint mtp"
+macro % "erase mtp"
+#endif
+#ifdef METAL6
+macro 4 "paint m4"
+macro $ "erase m4"
+macro 5 "paint m5"
+macro % "erase m5"
+macro 6 "paint mtp"
+macro ^ "erase mtp"
+#endif
+
+macro 7 "paint poly"
+# macro & "erase poly"
+# macro p "paint pdiff"
+macro n "paint ndiff"
+# macro l "erase labels"
+macro P "erase pdiff"
+macro N "erase ndiff"
+macro y "drc check; drc why"
+macro ? "select area; what"
+
+macro / "expand toggle"
+macro ^w "writeall force"
+macro ^e "edit"
+# macro ^x "quit"
+
+macro z "findbox zoom"
+# "f" conflicts with Cadence "full view", so use control-i to select cells.
+# macro f "select cell"
+macro ^i "select cell"
+
+# Leave keypad bindings as-is, further down.  However, keypad
+# keys generally don't translate through the VNC session, so
+# use the following arrow key bindings:
+#
+#                         no shift  shift
+# arrows only -> Pan       10%      100%
+# with alt    -> Move      1 lambda 1 grid
+# with ctrl   -> Stretch   1 lambda 1 grid
+# 
+# Pan 10 percent of the window size with arrows
+# macro XK_Left  "scroll l .1 w"
+# macro XK_Up    "scroll u .1 w"
+# macro XK_Right "scroll r .1 w"
+# macro XK_Down  "scroll d .1 w"
+
+# Pan 100 percent of the window size with arrows
+# macro Shift_XK_Left  "scroll l 1 w"
+# macro Shift_XK_Up    "scroll u 1 w"
+# macro Shift_XK_Right "scroll r 1 w"
+# macro Shift_XK_Down  "scroll d 1 w"
+
+# move 0.05um with arrows
+# macro Alt_XK_Left          "move l 1l"
+# macro Alt_XK_Right         "move r 1l"
+# macro Alt_XK_Up            "move u 1l"
+# macro Alt_XK_Down          "move d 1l"
+
+# move 1 grid unit with arrows
+# macro Alt_Shift_XK_Left  "move l 1g"
+# macro Alt_Shift_XK_Right "move r 1g"
+# macro Alt_Shift_XK_Up    "move u 1g"
+# macro Alt_Shift_XK_Down  "move d 1g"
+
+# stretch 0.05um with arrows
+# macro Control_XK_Left          "stretch l 1l"
+# macro Control_XK_Right         "stretch r 1l"
+# macro Control_XK_Up            "stretch u 1l"
+# macro Control_XK_Down          "stretch d 1l"
+
+# stretch 1 grid unit with arrows
+# macro Control_Shift_XK_Left  "stretch l 1g"
+# macro Control_Shift_XK_Right "stretch r 1g"
+# macro Control_Shift_XK_Up    "stretch u 1g"
+# macro Control_Shift_XK_Down  "stretch d 1g"
+
+# shift mouse wheel bindings for right-left shift
+macro Shift_XK_Pointer_Button4 "scroll r .05 w"
+macro Shift_XK_Pointer_Button5 "scroll l .05 w"
+
+# control mouse wheel bindings for zoom in/out
+macro Control_XK_Pointer_Button4 "zoom 0.70711"
+macro Control_XK_Pointer_Button5 "zoom 1.41421"
+
+# Bertrand's original arrow macros
+# move 1 grid unit with arrows
+macro XK_Left          "move l 1g"
+macro XK_Right         "move r 1g"
+macro XK_Up            "move u 1g"
+macro XK_Down          "move d 1g"
+
+# move 0.05um with arrows
+macro Control_XK_Left  "move l 1l"
+macro Control_XK_Right "move r 1l"
+macro Control_XK_Up    "move u 1l"
+macro Control_XK_Down  "move d 1l"
+
+# stretch 1 grid unit with arrows
+macro Shift_XK_Left          "stretch l 1g"
+macro Shift_XK_Right         "stretch r 1g"
+macro Shift_XK_Up            "stretch u 1g"
+macro Shift_XK_Down          "stretch d 1g"
+
+# stretch 0.05um with arrows
+macro Control_Shift_XK_Left  "stretch l 1l"
+macro Control_Shift_XK_Right "stretch r 1l"
+macro Control_Shift_XK_Up    "stretch u 1l"
+macro Control_Shift_XK_Down  "stretch d 1l"
+
+# Restore pan function on Alt-key
+# Pan 10 percent of the window size with arrows
+macro Alt_XK_Left  "scroll l .1 w"
+macro Alt_XK_Up    "scroll u .1 w"
+macro Alt_XK_Right "scroll r .1 w"
+macro Alt_XK_Down  "scroll d .1 w"
+
+# Pan 100 percent of the window size with arrows
+macro Alt_Shift_XK_Left  "scroll l 1 w"
+macro Alt_Shift_XK_Up    "scroll u 1 w"
+macro Alt_Shift_XK_Right "scroll r 1 w"
+macro Alt_Shift_XK_Down  "scroll d 1 w"
+
diff --git a/common/pdk.prm b/common/pdk.prm
new file mode 100644
index 0000000..719eb74
--- /dev/null
+++ b/common/pdk.prm
@@ -0,0 +1,26 @@
+; TODO: make changes to this file for TECHNAME?
+; configuration file for TECHNAME (values copied from osu035, a 0.35um process)
+; Note that these values are totally bogus!
+;
+
+lambda  0.01    ; length scaling, microns (1 lambda = 1 centimicron)
+
+capga   .0115 ; gate capacitance, pF/micron^2
+
+capda 0.0012
+capdp 0.0013
+cappda 0.00260
+cappdp 0.00090
+
+lowthresh  0.5  ; logic low threshold as a normalized voltage
+highthresh 0.5  ; logic high threshold as a normalized voltage
+
+cntpullup 0     ; irrelevant, cmos technology; no depletion transistors
+diffperim 0     ; don't include diffusion perimeters for sidewall cap.
+subparea 0      ; poly over transistor won't count as part of bulk-poly cap.
+diffext  0      ; diffusion extension for each transistor
+
+resistance n-channel dynamic-low    2     0.4   1844.70
+resistance p-channel dynamic-high   6.2   0.4   1489.10
+resistance n-channel static         2     0.4   2203.94
+resistance p-channel static         6.2   0.4   1693.37
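The .prm file drives switch-level RC timing analysis (e.g., in IRSIM). As a rough, hypothetical illustration of how such values combine — using this file's admittedly bogus numbers — a gate's capacitance is its area times capga, and a first-order delay estimate is that capacitance times the channel resistance:

```python
# Hypothetical use of the .prm values above; the numbers are the
# placeholder (osu035-derived) values and are not real process data.
capga = 0.0115          # gate capacitance, pF/micron^2
r_n_static = 2203.94    # n-channel static resistance, ohms (W=2, L=0.4 um)

w, l = 2.0, 0.4                      # transistor geometry, microns
cgate_pf = capga * w * l             # gate capacitance in pF
delay_ps = r_n_static * cgate_pf     # ohms * pF = picoseconds

print(round(cgate_pf, 4), round(delay_ps, 2))
```

This is only meant to show the units and roles of the parameters; a real simulator also folds in the diffusion and perimeter capacitances (capda, capdp, etc.) listed above.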
diff --git a/common/pdk.tcl b/common/pdk.tcl
new file mode 100644
index 0000000..04f32a6
--- /dev/null
+++ b/common/pdk.tcl
@@ -0,0 +1,274 @@
+#-------------------------------------------------------------------
+# General-purpose routines for the PDK script in all technologies
+#-------------------------------------------------------------------
+# 
+#----------------------------------------
+# Number Conversion Functions
+#----------------------------------------
+
+#---------------------
+# Microns to Lambda
+#---------------------
+proc magic::u2l {micron} {
+    set techlambda [magic::tech lambda]
+    set tech1 [lindex $techlambda 1]
+    set tech0 [lindex $techlambda 0]
+    set tscale [expr {$tech1 / $tech0}]
+    set lambdaout [expr {((int([magic::cif scale output] * 10000)) / 10000.0)}]
+    return [expr $micron / ($lambdaout*$tscale) ]
+}
+
+#---------------------
+# Lambda to Microns
+#---------------------
+proc magic::l2u {lambda} {
+    set techlambda [magic::tech lambda]
+    set tech1 [lindex $techlambda 1] ; set tech0 [lindex $techlambda 0]
+    set tscale [expr {$tech1 / $tech0}]
+    set lambdaout [expr {((int([magic::cif scale output] * 10000)) / 10000.0)}]
+    return [expr $lambda * $lambdaout * $tscale ]
+}
+
+#---------------------
+# Internal to Microns
+#---------------------
+proc magic::i2u { value } {
+    return [expr {((int([magic::cif scale output] * 10000)) / 10000.0) * $value}]
+}
+
+#---------------------
+# Microns to Internal
+#---------------------
+proc magic::u2i {value} {
+    return [expr {$value / ((int([magic::cif scale output] * 10000)) / 10000.0)}]
+}
+
+#---------------------
+# Float to Spice 
+#---------------------
+proc magic::float2spice {value} { 
+    if {$value >= 1.0e+6} { 
+	set exponent 1e+6
+	set unit "meg"
+    } elseif {$value >= 1.0e+3} { 
+	set exponent 1e+3
+	set unit "k"
+    } elseif { $value >= 1} { 
+	set exponent 1
+	set unit ""
+    } elseif {$value >= 1.0e-3} { 
+	set exponent 1e-3
+	set unit "m"
+    } elseif {$value >= 1.0e-6} { 
+	set exponent 1e-6
+	set unit "u"
+    } elseif {$value >= 1.0e-9} { 
+	set exponent 1e-9
+	set unit "n"
+    } elseif {$value >= 1.0e-12} { 
+	set exponent 1e-12
+	set unit "p"
+    } elseif {$value >= 1.0e-15} { 
+	set exponent 1e-15
+	set unit "f"
+    } else {
+	set exponent 1e-18
+	set unit "a"
+    }
+    set val [expr $value / $exponent]
+    set val [expr int($val * 1000) / 1000.0]
+    if {$val == 0} {set unit ""}
+    return $val$unit
+}
+
+#---------------------
+# Spice to Float
+#---------------------
+proc magic::spice2float {value {faultval 0.0}} { 
+    # Remove trailing units, at least for some common combinations
+    set value [string tolower $value]
+    set value [string map {um u nm n uF u nF n pF p fF f aF a} $value]
+    set value [string map {meg "* 1.0e6" k "* 1.0e3" m "* 1.0e-3" u "* 1.0e-6" \
+		 n "* 1.0e-9" p "* 1.0e-12" f "* 1.0e-15" a "* 1.0e-18"} $value]
+    if {[catch {set rval [expr $value]}]} {
+	puts stderr "Value is not numeric!"
+	set rval $faultval
+    }
+    return $rval
+}
+
+#---------------------
+# Numeric Precision
+#---------------------
+proc magic::3digitpastdecimal {value} {
+    set new [expr int([expr $value * 1000 + 0.5 ]) / 1000.0]
+    return $new
+}
+
+#-------------------------------------------------------------------
+# File Access Functions
+#-------------------------------------------------------------------
+
+#-------------------------------------------------------------------
+# Ensures that a cell name does not already exist, either in
+# memory or on disk.  Modifies the name until it is unique.
+#-------------------------------------------------------------------
+proc magic::cellnameunique {cellname} {
+    set i 0
+    set newname $cellname
+    while {[cellname list exists $newname] != 0 || [magic::searchcellondisk $newname] != 0} {
+	incr i
+	set newname ${cellname}_$i
+    }
+    return $newname
+}
+
+#-------------------------------------------------------------------
+# Looks to see if a cell exists on disk
+#-------------------------------------------------------------------
+proc magic::searchcellondisk {name} {
+    set rlist {}
+    foreach dir [path search] {
+	set ftry [file join $dir ${name}.mag]
+	if [file exists $ftry] {
+	    return 1
+	}
+    }
+    return 0
+} 
+
+#-------------------------------------------------------------------
+# Checks to see if a cell already exists on disk or in memory
+#-------------------------------------------------------------------
+proc magic::iscellnameunique {cellname} {
+    if {[cellname list exists $cellname] == 0 && [magic::searchcellondisk $cellname] == 0} { 
+	return 1
+    } else {
+	return 0
+    }
+}
+
+#--------------------------------------------------------------
+# Procedure that checks the user's "ip" subdirectory on startup
+# and adds each one's maglef subdirectory to the path.
+#--------------------------------------------------------------
+
+proc magic::query_mylib_ip {} {
+    global TECHPATH
+    global env
+    if [catch {set home $env(SUDO_USER)}] {
+        set home $env(USER)
+    }
+    set homedir /home/${home}
+    set ip_dirs [glob -directory ${homedir}/design/ip *]
+    set proj_dir [pwd]
+    set config_dir .config
+    set info_dir ${proj_dir}/${config_dir}
+    if {![file exists ${info_dir}]} {
+	set config_dir .ef-config
+	set info_dir ${proj_dir}/${config_dir}
+    }
+
+    set info_file ${info_dir}/info
+    set depends [dict create]
+    if {![catch {open $info_file r} ifd]} {
+        set depsec false
+        set ipname ""
+        while {[gets $ifd line] >= 0} {
+	    if {[string first dependencies: $line] >= 0} {
+	        set depsec true
+	    }
+	    if {$depsec} {
+		if {[string first version: $line] >= 0} {
+		    if {$ipname != ""} {
+			set ipvers [string trim [lindex [split $line] 1] ']
+			dict set depends $ipname $ipvers
+			set ipname ""
+		    } else {
+			puts stderr "Badly formatted info file in ${config_dir}!"
+		    }
+		} else {
+		    set ipname [string trim $line :]
+		}
+	    }
+	}
+    }
+
+    foreach dir $ip_dirs {
+	# Version handling:  version dependencies are found in
+	# ${config_dir}/info.  For all other IP, use the most recent
+	# version number.
+	set ipname [lindex [file split $dir] end]
+	if {![catch {set version [dict get $depends $ipname]}]} {
+	    if {[file isdirectory ${dir}/${version}/maglef]} {
+		addpath ${dir}/${version}/maglef
+		continue
+	    } else {
+		puts stderr "ERROR:  Dependency ${ipname} version ${version} does not exist"
+	    }
+	}
+
+	# Secondary directory is the version number.  Use the highest
+	# version available.
+
+	set sub_dirs {}
+        catch {set sub_dirs [glob -directory $dir *]}
+	set maxver 0.0
+	foreach subdir $sub_dirs {
+	    set vidx [string last / $subdir]
+	    incr vidx
+	    set version [string range $subdir $vidx end]
+	    if {$version > $maxver} {
+		set maxver $version
+	    }
+	}
+	if {[file exists ${dir}/${maxver}/maglef]} {
+	    # Compatibility rule:  foundry name must match.
+	    # Get foundry name from ${config_dir}/techdir symbolic link reference
+	    if {[file exists ${dir}/${maxver}/${config_dir}/techdir]} {
+		set technodedir [file link ${dir}/${maxver}/${config_dir}/techdir]
+		set nidx [string last / $technodedir]
+		set techdir [string range $technodedir 0 $nidx-1]
+		if {$techdir == $TECHPATH} {
+		    addpath ${dir}/${maxver}/maglef
+		}
+	    }
+	}
+    }
+}
+
+#--------------------------------------------------------------
+# Procedure that checks the user's design directory on startup
+# and adds each one's mag subdirectory to the path.
+#--------------------------------------------------------------
+
+proc magic::query_my_projects {} {
+    global TECHPATH
+    global env
+    if [catch {set home $env(SUDO_USER)}] {
+        set home $env(USER)
+    }
+    set homedir /home/${home}
+    set proj_dirs [glob -directory ${homedir}/design *]
+    foreach dir $proj_dirs {
+	# Compatibility rule:  foundry name must match.
+	# Get foundry name from ${config_dir}/techdir symbolic link reference
+	if {[file exists ${dir}/mag]} {
+	    set config_dir .config
+	    set tech_dir ${dir}/${config_dir}
+	    if {![file exists ${tech_dir}]} {
+		set config_dir .ef-config
+		set tech_dir ${dir}/${config_dir}
+	    }
+	    if {[file exists ${dir}/${config_dir}/techdir]} {
+		set technodedir [file link ${dir}/${config_dir}/techdir]
+		set nidx [string last / $technodedir]
+		set techdir [string range $technodedir 0 $nidx-1]
+		if {$techdir == $TECHPATH} {
+		    addpath ${dir}/mag
+		}
+	    }
+	}
+    }
+}
+
+#----------------------------------------------------------------
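The prefix selection in float2spice above can be sketched standalone in Python (an illustration, not part of the PDK scripts): like the Tcl proc, it picks the largest SI prefix whose magnitude does not exceed the value, then truncates to three places past the decimal.

```python
def float2spice(value):
    # Pick the largest SI prefix not exceeding the value, then truncate
    # to three places past the decimal (mirroring the Tcl proc).
    for exponent, unit in [(1e6, 'meg'), (1e3, 'k'), (1.0, ''), (1e-3, 'm'),
                           (1e-6, 'u'), (1e-9, 'n'), (1e-12, 'p'), (1e-15, 'f')]:
        if value >= exponent:
            break
    else:
        exponent, unit = 1e-18, 'a'
    val = int(value / exponent * 1000) / 1000.0
    if val == 0:
        unit = ''
    return '{:g}{}'.format(val, unit)

print(float2spice(1500))    # -> 1.5k
print(float2spice(2e-12))   # -> 2p
```

Note that the truncation (int rather than round) matches the Tcl code, so a value that lands just under a clean decimal can come out as, e.g., 4.999p rather than 5p.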
diff --git a/common/preproc.py b/common/preproc.py
new file mode 100755
index 0000000..1fca5be
--- /dev/null
+++ b/common/preproc.py
@@ -0,0 +1,576 @@
+#!/usr/bin/env python3
+#--------------------------------------------------------------------
+#
+# preproc.py
+#
+# General purpose macro preprocessor
+#
+#--------------------------------------------------------------------
+# Usage:
+#
+#	preproc.py input_file [output_file] [-D<variable> ...]
+#
+# Where <variable> may be a keyword or a key=value pair
+#
+# Syntax:  Basically like cpp.  However, this preprocessor handles
+# only a limited set of keywords, so it does not otherwise mangle
+# the file in the belief that it must be C code.  Handling of boolean
+# relations is important, so these are thoroughly defined (see below)
+#
+#	#if defined(<variable>) [...]
+#	#ifdef <variable>
+#	#ifndef <variable>
+#	#elseif <variable>
+#	#else
+#	#endif
+#
+#	#define <variable> [...]
+#	#define <variable>(<parameters>) [...]
+#	#undef <variable>
+#
+#	#include <filename>
+#
+# <variable> may be
+#	<keyword>
+#	<keyword>=<value>
+#
+#	<keyword> without '=' is effectively the same as <keyword>=1
+#	In a conditional, lack of a keyword is equivalent to <keyword>=0.
+#
+# Boolean operators (in order of precedence):
+#	!	NOT
+#	&&	AND
+#	||	OR	
+#
+# Comments:
+#       Most comments (C-like or Tcl-like) are output as-is.  A
+#	line beginning with "###" is treated as a preprocessor
+#	comment and is not copied to the output.
+#
+# Examples:
+#	#if defined(X) || defined(Y)
+#	#else
+#	#if defined(Z)
+#	#endif
+#--------------------------------------------------------------------
+
+import re
+import sys
+
+def solve_statement(condition):
+
+    defrex = re.compile('defined[ \t]*\(([^\)]+)\)')
+    orrex = re.compile('(.+)\|\|(.+)')
+    andrex = re.compile('(.+)&&(.+)')
+    notrex = re.compile('!([^&\|]+)')
+    parenrex = re.compile('\(([^\)]+)\)')
+    leadspacerex = re.compile('^[ \t]+(.*)')
+    endspacerex = re.compile('(.*)[ \t]+$')
+
+    matchfound = True
+    while matchfound:
+        matchfound = False
+
+        # Search for defined(K) (K must be a single keyword)
+        # If the keyword was defined, then it should have been replaced by 1
+        lmatch = defrex.search(condition)
+        if lmatch:
+            key = lmatch.group(1)
+            if key == '1':
+                repl = 1
+            else:
+                repl = 0
+
+            condition = defrex.sub(str(repl), condition)
+            matchfound = True
+
+        # Search for (X) recursively
+        lmatch = parenrex.search(condition)
+        if lmatch:
+            repl = solve_statement(lmatch.group(1))
+            condition = parenrex.sub(str(repl), condition)
+            matchfound = True
+
+        # Search for !X recursively
+        lmatch = notrex.search(condition)
+        if lmatch:
+            only = solve_statement(lmatch.group(1))
+            if only == '1':
+                repl = '0'
+            else:
+                repl = '1'
+            condition = notrex.sub(str(repl), condition)
+            matchfound = True
+
+        # Search for A&&B recursively
+        lmatch = andrex.search(condition)
+        if lmatch:
+            first = solve_statement(lmatch.group(1))
+            second = solve_statement(lmatch.group(2))
+            if first == '1' and second == '1':
+                repl = '1'
+            else:
+                repl = '0'
+            condition = andrex.sub(str(repl), condition)
+            matchfound = True
+
+        # Search for A||B recursively
+        lmatch = orrex.search(condition)
+        if lmatch:
+            first = solve_statement(lmatch.group(1))
+            second = solve_statement(lmatch.group(2))
+            if first == '1' or second == '1':
+                repl = '1'
+            else:
+                repl = '0'
+            condition = orrex.sub(str(repl), condition)
+            matchfound = True
+ 
+    # Remove whitespace
+    lmatch = leadspacerex.match(condition)
+    if lmatch:
+        condition = lmatch.group(1)
+    lmatch = endspacerex.match(condition)
+    if lmatch:
+        condition = lmatch.group(1)
+    
+    return condition
+
+def solve_condition(condition, keys, defines, keyrex):
+    # Do definition replacement on the conditional
+    for keyword in keys:
+        condition = keyrex[keyword].sub(defines[keyword], condition)
+
+    value = solve_statement(condition)
+    if value == '1':
+        return 1
+    else:
+        return 0
+
+def sortkeys(keys):
+    newkeys = []
+    for i in range(0, len(keys)):
+        keyword = keys[i]
+        found = False
+        for j in range(0, len(newkeys)):
+            inword = newkeys[j]
+            if inword in keyword:
+                # Insert keyword before inword
+                newkeys.insert(j, keyword)
+                found = True
+                break
+        if not found:
+            newkeys.append(keyword)
+    return newkeys
+
+def runpp(keys, keyrex, defines, ccomm, incdirs, inputfile, ofile):
+
+    includerex = re.compile('^[ \t]*#include[ \t]+"*([^ \t\n\r"]+)')
+    definerex = re.compile('^[ \t]*#define[ \t]+([^ \t]+)[ \t]+(.+)')
+    paramrex = re.compile('^([^\(]+)\(([^\)]+)\)')
+    defrex = re.compile('^[ \t]*#define[ \t]+([^ \t\n\r]+)')
+    undefrex = re.compile('^[ \t]*#undef[ \t]+([^ \t\n\r]+)')
+    ifdefrex = re.compile('^[ \t]*#ifdef[ \t]+(.+)')
+    ifndefrex = re.compile('^[ \t]*#ifndef[ \t]+(.+)')
+    ifrex = re.compile('^[ \t]*#if[ \t]+(.+)')
+    elseifrex = re.compile('^[ \t]*#elseif[ \t]+(.+)')
+    elserex = re.compile('^[ \t]*#else')
+    endifrex = re.compile('^[ \t]*#endif')
+    commentrex = re.compile('^###[^#]*$')
+    ccstartrex = re.compile('/\*')		# C-style comment start
+    ccendrex = re.compile('\*/')		# C-style comment end
+    contrex = re.compile('.*\\\\$')		# Backslash continuation line
+
+    badifrex = re.compile('^[ \t]*#if[ \t]*.*')
+    badelserex = re.compile('^[ \t]*#else[ \t]*.*')
+
+    # This code is not designed to operate on huge files.  Neither is it designed to be
+    # efficient.
+
+    # ifblock state:
+    # -1 : not in an if/else block
+    #  0 : no condition satisfied yet
+    #  1 : condition satisfied
+    #  2 : condition was handled, waiting for endif
+
+    ifile = False
+    try:
+        ifile = open(inputfile, 'r')
+    except FileNotFoundError:
+        for dir in incdirs:
+            try:
+                ifile = open(dir + '/' + inputfile, 'r')
+            except FileNotFoundError:
+                pass
+            else:
+                break
+
+    if not ifile:
+        print("Error:  Cannot open file " + inputfile + " for reading.\n", file=sys.stderr)
+        return
+
+    ccblock = -1
+    ifblock = -1
+    ifstack = []
+    lineno = 0
+
+    filetext = ifile.readlines()
+    lastline = []
+
+    for line in filetext:
+        lineno += 1
+
+        # C-style comments override everything else
+        if ccomm:
+            if ccblock == -1:
+                pmatch = ccstartrex.search(line)
+                if pmatch:
+                    ematch = ccendrex.search(line[pmatch.end(0):])
+                    if ematch:
+                        line = line[0:pmatch.start(0)] + line[pmatch.end(0) + ematch.end(0):]
+                    else:
+                        line = line[0:pmatch.start(0)]
+                        ccblock = 1
+            elif ccblock == 1:
+                ematch = ccendrex.search(line)
+                if ematch:
+                    line = line[ematch.end(0):]
+                    ccblock = -1
+                else:
+                    continue
+
+        # Handle continuation detected in previous line
+        if lastline:
+            # Note:  The previous line ends with a backslash followed by a
+            # newline, so strip the last two characters from that line.
+            line = lastline[0:-2] + line
+            lastline = []
+
+        # Continuation lines have the next highest priority.  However, this
+        # script will attempt to keep continuation lines in the body of the
+        # text and only collapse lines where continuation lines occur in
+        # a preprocessor statement.
+
+        cmatch = contrex.match(line)
+
+        # Ignore lines beginning with "###"
+        pmatch = commentrex.match(line)
+        if pmatch:
+            continue
+
+        # Handle ifdef
+        pmatch = ifdefrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            if ifblock != -1:
+                ifstack.append(ifblock)
+                
+            if ifblock == 1 or ifblock == -1:
+                condition = pmatch.group(1)
+                ifblock = solve_condition(condition, keys, defines, keyrex)
+            else:
+                ifblock = 2
+            continue
+
+        # Handle ifndef
+        pmatch = ifndefrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            if ifblock != -1:
+                ifstack.append(ifblock)
+                
+            if ifblock == 1 or ifblock == -1:
+                condition = pmatch.group(1)
+                ifblock = solve_condition(condition, keys, defines, keyrex)
+                ifblock = 1 if ifblock == 0 else 0
+            else:
+                ifblock = 2
+            continue
+
+        # Handle if
+        pmatch = ifrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            if ifblock != -1:
+                ifstack.append(ifblock)
+
+            if ifblock == 1 or ifblock == -1:
+                condition = pmatch.group(1)
+                ifblock = solve_condition(condition, keys, defines, keyrex)
+            else:
+                ifblock = 2
+            continue
+
+        # Handle elseif
+        pmatch = elseifrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            if ifblock == -1:
+                print("Error: #elseif without preceding #if at line " + str(lineno) + ".", file=sys.stderr)
+                ifblock = 0
+
+            if ifblock == 1:
+                ifblock = 2
+            elif ifblock != 2:
+                condition = pmatch.group(1)
+                ifblock = solve_condition(condition, keys, defines, keyrex)
+            continue
+
+        # Handle else
+        pmatch = elserex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            if ifblock == -1:
+                print("Error: #else without preceding #if at line " + str(lineno) + ".", file=sys.stderr)
+                ifblock = 0
+
+            if ifblock == 1:
+                ifblock = 2
+            elif ifblock == 0:
+                ifblock = 1
+            continue
+
+        # Handle endif
+        pmatch = endifrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            if ifblock == -1:
+                print("Error:  #endif outside of #if block at line " + str(lineno) + " (ignored)", file=sys.stderr)
+            elif ifstack:
+                ifblock = ifstack.pop()
+            else:
+                ifblock = -1
+            continue
+                 
+        # Check for 'if' or 'else' that were not properly formed
+        pmatch = badifrex.match(line)
+        if pmatch:
+            print("Error:  Badly formed #if statement at line " + str(lineno) + " (ignored)", file=sys.stderr)
+            if ifblock != -1:
+                ifstack.append(ifblock)
+
+            if ifblock == 1 or ifblock == -1:
+                ifblock = 0
+            else:
+                ifblock = 2
+            continue
+
+        pmatch = badelserex.match(line)
+        if pmatch:
+            print("Error:  Badly formed #else statement at line " + str(lineno) + " (ignored)", file=sys.stderr)
+            ifblock = 2
+            continue
+
+        # Ignore all lines that are not satisfied by a conditional
+        if ifblock == 0 or ifblock == 2:
+            continue
+
+        # Handle include.  Note that this code does not expect or
+        # handle 'if' blocks that cross file boundaries.
+        pmatch = includerex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            inclfile = pmatch.group(1)
+            runpp(keys, keyrex, defines, ccomm, incdirs, inclfile, ofile)
+            continue
+
+        # Handle define (with value)
+        pmatch = definerex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            condition = pmatch.group(1)
+
+            # Additional handling of definitions with parameters:  #define X(a,b,c) ...
+            rmatch = paramrex.match(condition) 
+            if rmatch:
+                # 'condition' as a key into keyrex only needs to be unique.
+                # Use the definition word without everything in parentheses
+                condition = rmatch.group(1)
+
+                # 'pcondition' is the actual search regexp and must capture all
+                # the parameters individually for substitution
+
+                parameters = rmatch.group(2).split(',')
+
+                # Generate the regexp string to match comma-separated values.
+                # Note that this is based on the cpp preprocessor, which
+                # apparently allows commas in arguments if surrounded by
+                # parentheses;  e.g., "def(a, b, (c1,c2))".  This is NOT
+                # handled.
+
+                pcondition = condition + '\('
+                for param in parameters[0:-1]:
+                    pcondition += '(.*),'
+                pcondition += '(.*)\)'
+
+                # Generate the substitution string with group substitutions
+                pvalue = pmatch.group(2)
+                idx = 1
+                for param in parameters:
+                    pvalue = pvalue.replace(param, '\g<' + str(idx) + '>')
+                    idx = idx + 1
+
+                defines[condition] = pvalue
+                keyrex[condition] = re.compile(pcondition)
+            else:
+                parameters = []
+                value = pmatch.group(2)
+                # Note:  Need to check for infinite recursion here, but it's tricky.
+                defines[condition] = value
+                keyrex[condition] = re.compile(condition)
+
+            if condition not in keys:
+                # Parameterized keys go to the front of the list
+                if parameters:
+                    keys.insert(0, condition)
+                else:
+                    keys.append(condition)
+                keys = sortkeys(keys)
+            continue
+
+        # Handle define (simple case, no value)
+        pmatch = defrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            condition = pmatch.group(1)
+            defines[condition] = '1'
+            keyrex[condition] = re.compile(condition)
+            if condition not in keys:
+                keys.append(condition)
+                keys = sortkeys(keys)
+            continue
+
+        # Handle undef
+        pmatch = undefrex.match(line)
+        if pmatch:
+            if cmatch:
+                lastline = line
+                continue
+            condition = pmatch.group(1)
+            if condition in keys:
+                defines.pop(condition)
+                keyrex.pop(condition)
+                keys.remove(condition)
+            continue
+
+        # Now do definition replacement on what's left (if anything)
+        # This must be done repeatedly from the top until there are no
+        # more substitutions to make.
+
+        while True:
+            origline = line
+            for keyword in keys:
+                newline = keyrex[keyword].sub(defines[keyword], line)
+                if newline != line:
+                    line = newline
+                    break
+                    
+            if line == origline:
+                break
+                
+        # Output the line
+        print(line, file=ofile, end='')
+
+    if ifblock != -1 or ifstack != []:
+        print("Error:  input file ended with an unterminated #if block.", file=sys.stderr)
+
+    if ifile != sys.stdin:
+        ifile.close()
+    return
+
+def printusage(progname):
+    print('Usage: ' + progname + ' input_file [output_file] [-options]')
+    print('   Options are:')
+    print('      -help         Print this help text.')
+    print('      -ccomm        Remove C comments in /* ... */ delimiters.')
+    print('      -D<def>       Define word <def> and set its value to 1.')
+    print('      -D<def>=<val> Define word <def> and set its value to <val>.')
+    print('      -I<dir>       Add <dir> to search path for input files.')
+    return
+
+if __name__ == '__main__':
+
+    # Parse command line for options and arguments
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.find('-', 0) == 0:
+            options.append(item)
+        else:
+            arguments.append(item)
+
+    if len(arguments) > 0:
+        inputfile = arguments[0]
+        if len(arguments) > 1:
+            outputfile = arguments[1]
+        else:
+            outputfile = None
+    else:
+        printusage(sys.argv[0])
+        sys.exit(0)
+
+    defines = {}
+    keyrex = {}
+    keys = []
+    incdirs = []
+    ccomm = False
+    for item in options:
+        result = item.split('=')
+        if result[0] == '-help':
+            printusage(sys.argv[0])
+            sys.exit(0)
+        elif result[0] == '-ccomm':
+            ccomm = True
+        elif result[0][0:2] == '-I':
+            incdirs.append(result[0][2:])
+        elif result[0][0:2] == '-D':
+            keyword = result[0][2:]
+            try:
+                value = result[1]
+            except:
+                value = '1'
+            defines[keyword] = value
+            keyrex[keyword] = re.compile(keyword)
+            keys.append(keyword)
+            keys = sortkeys(keys)
+        else:
+            print('Bad option ' + item + ', options are -help, -ccomm, -D<def>, -I<dir>\n')
+            sys.exit(1)
+
+    if outputfile:
+        ofile = open(outputfile, 'w')
+    else:
+        ofile = sys.stdout
+
+    if not ofile:
+        print("Error:  Cannot open file " + outputfile + " for writing.")
+        sys.exit(1)
+
+    # Sort keys so that if any definition contains another definition, the
+    # subset word is handled last;  otherwise the subset word will get
+    # substituted, screwing up the definition names in which it occurs.
+
+    keys = sortkeys(keys)
+
+    runpp(keys, keyrex, defines, ccomm, incdirs, inputfile, ofile)
+    if ofile != sys.stdout:
+        ofile.close()
+    sys.exit(0)
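The conditional reduction in solve_statement can be illustrated with a compact standalone sketch (illustrative only — preproc.py itself recurses into subexpressions rather than iterating over leaf patterns): rewrite defined(K) to 0/1, then repeatedly collapse parentheses, !, &&, and || until the expression is a single digit.

```python
import re

def evaluate(cond, defined):
    # Replace defined(K) with 1/0 according to the 'defined' set, then
    # reduce the boolean expression by repeated regex rewriting.
    cond = re.sub(r'defined\s*\(([^)]+)\)',
                  lambda m: '1' if m.group(1).strip() in defined else '0', cond)
    changed = True
    while changed:
        prev = cond
        cond = re.sub(r'\(\s*([01])\s*\)', r'\1', cond)            # (X)  -> X
        cond = re.sub(r'!\s*([01])',
                      lambda m: '10'[int(m.group(1))], cond)       # !X   -> NOT X
        cond = re.sub(r'([01])\s*&&\s*([01])',
                      lambda m: str(int(m.group(1)) & int(m.group(2))), cond)
        cond = re.sub(r'([01])\s*\|\|\s*([01])',
                      lambda m: str(int(m.group(1)) | int(m.group(2))), cond)
        changed = (cond != prev)
    return cond.strip() == '1'

print(evaluate('defined(X) || (defined(Y) && !defined(Z))', {'Y'}))  # -> True
```

As in preproc.py, an undefined keyword behaves as 0, so the condition above is true solely because Y is defined and Z is not.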
diff --git a/common/remove_specify.py b/common/remove_specify.py
new file mode 100755
index 0000000..746a06f
--- /dev/null
+++ b/common/remove_specify.py
@@ -0,0 +1,94 @@
+#!/usr/bin/env python3
+#
+# Remove timing information from a verilog file, which is everything between
+# the keywords "specify" and "endspecify".
+#
+# Also filter the file to remove any backslash continuation lines, which
+# iverilog does not parse.  If no output file is given, the input file is
+# rewritten in place.  If the file is unmodified (it contains no specify
+# blocks or continuation lines), it is left untouched.  If the input file
+# is a symbolic link and gets modified, then remove the symbolic link
+# before overwriting with the modified file.
+#
+
+import stat
+import sys
+import os
+import re
+
+def makeuserwritable(filepath):
+    if os.path.exists(filepath):
+        st = os.stat(filepath)
+        os.chmod(filepath, st.st_mode | stat.S_IWUSR)
+
+def remove_specify(vfile, outfile):
+    modified = False
+    with open(vfile, 'r') as ifile:
+        vtext = ifile.read()
+
+    if outfile == None:
+        outfile = vfile
+
+    # Remove backslash-followed-by-newline and absorb initial whitespace.  It
+    # is unclear what initial whitespace means in this context, as the use-
+    # case that has been seen seems to work under the assumption that leading
+    # whitespace is ignored up to the amount used by the last indentation.
+
+    vlines = re.sub('\\\\\n[ \t]*', '', vtext)
+
+    specrex = re.compile('\n[ \t]*specify[ \t\n]+')
+    endspecrex = re.compile('\n[ \t]*endspecify')
+    smatch = specrex.search(vlines)
+    while smatch:
+        specstart = smatch.start()
+        specpos = smatch.end()
+        ematch = endspecrex.search(vlines[specpos:])
+        specend = ematch.end()
+        vtemp = vlines[0:specstart + 1] + vlines[specpos + specend + 1:]
+        vlines = vtemp
+        smatch = specrex.search(vlines)
+
+    if vlines != vtext:
+        # File contents have been modified, so if this file was a symbolic
+        # link, then remove it.  Otherwise, overwrite the file with the
+        # modified contents.
+        if outfile == vfile:
+            if os.path.islink(vfile):
+                os.unlink(vfile)
+        if os.path.exists(outfile):
+            makeuserwritable(outfile)
+        with open(outfile, 'w') as ofile:
+            ofile.write(vlines)
+
+    elif outfile != vfile:
+        if os.path.exists(outfile):
+            makeuserwritable(outfile)
+        with open(outfile, 'w') as ofile:
+            ofile.write(vlines)
+
+# If called as main, run remove_specify
+
+if __name__ == '__main__':
+
+    # Divide up command line into options and arguments
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.startswith('-'):
+            options.append(item)
+        else:
+            arguments.append(item)
+
+    # Need one argument:  path to verilog netlist
+    # If two arguments, then 2nd argument is the output file.
+
+    if len(arguments) == 2:
+        netlist_path = arguments[0]
+        output_path = arguments[1]
+        remove_specify(netlist_path, output_path)
+    elif len(arguments) == 1:
+        netlist_path = arguments[0]
+        remove_specify(netlist_path, None)
+    else:
+        print("Usage:  remove_specify.py <file_path> [<output_path>]")
+
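For reference, the specify-block removal above can be exercised standalone on an in-memory sample using the same regexes. The `buf_x1` module below is a made-up example, not foundry data:

```python
import re

# Hypothetical cell netlist containing a specify...endspecify timing block.
sample = ("module buf_x1 (A, X);\n"
          "  input A;\n"
          "  output X;\n"
          "  specify\n"
          "    (A => X) = (0.1, 0.1);\n"
          "  endspecify\n"
          "  assign X = A;\n"
          "endmodule\n")

# Same approach as remove_specify(): join backslash-continuation lines,
# then excise everything from "specify" through "endspecify".
text = re.sub(r'\\\n[ \t]*', '', sample)
specrex = re.compile('\n[ \t]*specify[ \t\n]+')
endspecrex = re.compile('\n[ \t]*endspecify')
smatch = specrex.search(text)
while smatch:
    specstart = smatch.start()
    specpos = smatch.end()
    ematch = endspecrex.search(text[specpos:])
    if not ematch:
        break      # unmatched specify; stop rather than crash
    specend = ematch.end()
    text = text[0:specstart + 1] + text[specpos + specend + 1:]
    smatch = specrex.search(text)
```

The surviving text keeps the module ports and the `assign` statement; both `specify` and `endspecify` are gone.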
diff --git a/common/soc_floorplanner.py b/common/soc_floorplanner.py
new file mode 100755
index 0000000..959a623
--- /dev/null
+++ b/common/soc_floorplanner.py
@@ -0,0 +1,2619 @@
+#!/usr/bin/env python3
+#
+#--------------------------------------------------------
+# Padframe Editor and Core Floorplanner
+#
+#--------------------------------------------------------
+# Written by Tim Edwards
+# efabless, inc.
+# April 24, 2019
+# Version 0.5
+# Based on https://github.com/YosysHQ/padring (requirement)
+# Update: May 9, 2019 to add console message window
+# Update: May 10, 2019 to incorporate core floorplanning
+# Update: Jan 31, 2020 to allow batch operation
+#--------------------------------------------------------
+
+import os
+import re
+import sys
+import glob
+import json
+import math
+import shutil
+import signal
+import select
+import subprocess
+import faulthandler
+
+import tkinter
+from tkinter import ttk
+from tkinter import filedialog
+import tksimpledialog
+from consoletext import ConsoleText
+
+# User preferences file (if it exists)
+prefsfile = '~/design/.profile/prefs.json'
+
+#------------------------------------------------------
+# Dialog for entering a pad
+#------------------------------------------------------
+
+class PadNameDialog(tksimpledialog.Dialog):
+    def body(self, master, warning=None, seed=None):
+        if warning:
+            ttk.Label(master, text=warning).grid(row = 0, columnspan = 2, sticky = 'wns')
+        ttk.Label(master, text="Enter new group name:").grid(row = 1, column = 0, sticky = 'wns')
+        self.nentry = ttk.Entry(master)
+        self.nentry.grid(row = 1, column = 1, sticky = 'ewns')
+        if seed:
+            self.nentry.insert(0, seed)
+        return self.nentry # initial focus
+
+    def apply(self):
+        return self.nentry.get()
+
+#------------------------------------------------------
+# Dialog for entering core dimensions
+#------------------------------------------------------
+
+class CoreSizeDialog(tksimpledialog.Dialog):
+    def body(self, master, warning="Chip core dimensions", seed=None):
+        if warning:
+            ttk.Label(master, text=warning).grid(row = 0, columnspan = 2, sticky = 'wns')
+        ttk.Label(master, text="Enter core width x height (microns):").grid(row = 1, column = 0, sticky = 'wns')
+        self.nentry = ttk.Entry(master)
+        self.nentry.grid(row = 1, column = 1, sticky = 'ewns')
+
+        if seed:
+            self.nentry.insert(0, seed)
+        return self.nentry # initial focus
+
+    def apply(self):
+        return self.nentry.get()
+
+#------------------------------------------------
+# SoC Floorplanner and Padframe Generator GUI
+#------------------------------------------------
+
+class SoCFloorplanner(ttk.Frame):
+    """Open Galaxy Pad Frame Generator."""
+
+    def __init__(self, parent = None, *args, **kwargs):
+        '''See the __init__ for Tkinter.Toplevel.'''
+        ttk.Frame.__init__(self, parent, *args[1:], **kwargs)
+        self.root = parent
+        self.init_data()
+        if args[0]:
+            self.do_gui = True
+            self.init_gui()
+        else:
+            self.do_gui = False
+            self.use_console = False
+
+    def on_quit(self):
+        """Exits program."""
+        sys.exit(0)
+
+    def init_gui(self):
+        """Builds GUI."""
+        global prefsfile
+
+        message = []
+        fontsize = 11
+
+        # Read user preferences file, get default font size from it.
+        prefspath = os.path.expanduser(prefsfile)
+        if os.path.exists(prefspath):
+            with open(prefspath, 'r') as f:
+                self.prefs = json.load(f)
+            if 'fontsize' in self.prefs:
+                fontsize = self.prefs['fontsize']
+        else:
+            self.prefs = {}
+
+        s = ttk.Style()
+
+        available_themes = s.theme_names()
+        s.theme_use(available_themes[0])
+
+        s.configure('normal.TButton', font=('Helvetica', fontsize), border = 3, relief = 'raised')
+        s.configure('title.TLabel', font=('Helvetica', fontsize, 'bold italic'),
+                        foreground = 'brown', anchor = 'center')
+        s.configure('blue.TLabel', font=('Helvetica', fontsize), foreground = 'blue')
+        s.configure('normal.TLabel', font=('Helvetica', fontsize))
+        s.configure('normal.TCheckbutton', font=('Helvetica', fontsize))
+        s.configure('normal.TMenubutton', font=('Helvetica', fontsize))
+        s.configure('normal.TEntry', font=('Helvetica', fontsize), background='white')
+        s.configure('pad.TLabel', font=('Helvetica', fontsize), foreground = 'blue', relief = 'flat')
+        s.configure('select.TLabel', font=('Helvetica', fontsize, 'bold'), foreground = 'white',
+			background = 'blue', relief = 'flat')
+ 
+        # parent.withdraw()
+        self.root.title('Padframe Generator and Core Floorplanner')
+        self.root.option_add('*tearOff', 'FALSE')
+        self.pack(side = 'top', fill = 'both', expand = 'true')
+        self.root.protocol("WM_DELETE_WINDOW", self.on_quit)
+
+        pane = tkinter.PanedWindow(self, orient = 'vertical', sashrelief = 'groove',
+			sashwidth = 6)
+        pane.pack(side = 'top', fill = 'both', expand = 'true')
+
+        self.toppane = ttk.Frame(pane)
+        self.botpane = ttk.Frame(pane)
+
+        self.toppane.columnconfigure(0, weight = 1)
+        self.toppane.rowconfigure(0, weight = 1)
+        self.botpane.columnconfigure(0, weight = 1)
+        self.botpane.rowconfigure(0, weight = 1)
+
+        # Scrolled frame using canvas widget
+        self.pframe = tkinter.Frame(self.toppane)
+        self.pframe.grid(row = 0, column = 0, sticky = 'news')
+        self.pframe.rowconfigure(0, weight = 1)
+        self.pframe.columnconfigure(0, weight = 1)
+
+        # Add column on the left, listing all groups and the pads they belong to.
+        # This starts as just a frame to be filled.  Use a canvas to create a
+        # scrolled frame.
+
+        # The primary frame holds the canvas
+        self.canvas = tkinter.Canvas(self.pframe, background = "white")
+        self.canvas.grid(row = 0, column = 0, sticky = 'news')
+
+        # Add Y scrollbar to pad list window
+        xscrollbar = ttk.Scrollbar(self.pframe, orient = 'horizontal')
+        xscrollbar.grid(row = 1, column = 0, sticky = 'news')
+        yscrollbar = ttk.Scrollbar(self.pframe, orient = 'vertical')
+        yscrollbar.grid(row = 0, column = 1, sticky = 'news')
+
+        self.canvas.config(xscrollcommand = xscrollbar.set)
+        xscrollbar.config(command = self.canvas.xview)
+        self.canvas.config(yscrollcommand = yscrollbar.set)
+        yscrollbar.config(command = self.canvas.yview)
+
+        self.canvas.bind("<Button-4>", self.on_scrollwheel)
+        self.canvas.bind("<Button-5>", self.on_scrollwheel)
+
+        # Configure callback
+        self.canvas.bind("<Configure>", self.frame_configure)
+
+        # Add a text window to capture output.  Redirect print statements to it.
+        self.console = ttk.Frame(self.botpane)
+        self.console.grid(column = 0, row = 0, sticky = "news")
+        self.text_box = ConsoleText(self.console, wrap='word', height = 4)
+        self.text_box.pack(side='left', fill='both', expand='true')
+        console_scrollbar = ttk.Scrollbar(self.console)
+        console_scrollbar.pack(side='right', fill='y')
+        # Attach console to scrollbar
+        self.text_box.config(yscrollcommand = console_scrollbar.set)
+        console_scrollbar.config(command = self.text_box.yview)
+
+        # Add the bottom bar with buttons
+        self.bbar = ttk.Frame(self.botpane)
+        self.bbar.grid(column = 0, row = 1, sticky = "news")
+
+        self.bbar.import_button = ttk.Button(self.bbar, text='Import',
+		command=self.vlogimport, style='normal.TButton')
+        self.bbar.import_button.grid(column=0, row=0, padx = 5)
+
+        self.bbar.generate_button = ttk.Button(self.bbar, text='Generate',
+		command=self.generate, style='normal.TButton')
+        self.bbar.generate_button.grid(column=1, row=0, padx = 5)
+
+        self.bbar.save_button = ttk.Button(self.bbar, text='Save',
+		command=self.save, style='normal.TButton')
+        self.bbar.save_button.grid(column=2, row=0, padx = 5)
+
+        self.bbar.cancel_button = ttk.Button(self.bbar, text='Quit',
+		command=self.on_quit, style='normal.TButton')
+        self.bbar.cancel_button.grid(column=3, row=0, padx = 5)
+
+        pane.add(self.toppane)
+        pane.add(self.botpane)
+        pane.paneconfig(self.toppane, stretch='first')
+
+    def init_data(self):
+
+        self.vlogpads = []
+        self.corecells = []
+        self.Npads = []
+        self.Spads = []
+        self.Epads = []
+        self.Wpads = []
+        self.NEpad = []
+        self.NWpad = []
+        self.SEpad = []
+        self.SWpad = []
+        self.coregroup = []
+
+        self.celldefs = []
+        self.coredefs = []
+        self.selected = []
+        self.ioleflibs = []
+        self.llx = 0
+        self.lly = 0
+        self.urx = 0
+        self.ury = 0
+
+        self.event_data = {}
+        self.event_data['x0'] = 0
+        self.event_data['y0'] = 0
+        self.event_data['x'] = 0
+        self.event_data['y'] = 0
+        self.event_data['tag'] = None
+        self.scale = 1.0
+        self.margin = 10
+        self.pad_rotation = 0
+
+        self.init_messages = []
+        self.stdout = None
+        self.stderr = None
+
+        self.keep_cfg = False
+        self.ef_format = False
+        self.use_console = False
+
+    def init_padframe(self):
+        self.set_project()
+        self.vlogimport()
+        self.readplacement(precheck=True)
+        self.resolve()
+        self.generate(0)
+
+    # Local routines for handling printing to the text console
+
+    def print(self, message, file=None, end='\n', flush=True):
+        if not file:
+            if not self.use_console:
+                file = sys.stdout
+            else:
+                file = ConsoleText.StdoutRedirector(self.text_box)
+        if self.stdout:
+            print(message, file=file, end=end)
+            if flush:
+                self.stdout.flush()
+                self.update_idletasks()
+        else:
+            self.init_messages.append(message)
+
+    def text_to_console(self):
+        # Redirect stdout and stderr to the console as the last thing to do. . .
+        # Otherwise errors in the GUI get sucked into the void.
+
+        self.stdout = sys.stdout
+        self.stderr = sys.stderr
+        if self.use_console:
+            sys.stdout = ConsoleText.StdoutRedirector(self.text_box)
+            sys.stderr = ConsoleText.StderrRedirector(self.text_box)
+
+        if len(self.init_messages) > 0:
+            for message in self.init_messages:
+                self.print(message)
+            self.init_messages = []
+
+    # Set the project name(s).  This is the name of the top-level verilog.
+    # The standard protocol is that the project directory contains a file
+    # project.json that defines a name 'ip-name' that is the same as the
+    # layout name, the verilog module name, etc.  
+
+    def set_project(self):
+        # Check pwd
+        pwdname = self.projectpath if self.projectpath else os.getcwd()
+        
+        subdir = os.path.split(pwdname)[1]
+        if subdir == 'mag' or subdir == 'verilog':
+            projectpath = os.path.split(pwdname)[0]
+        else:
+            projectpath = pwdname
+
+        projectroot = os.path.split(projectpath)[0]
+        projectdirname = os.path.split(projectpath)[1]
+
+        # Check for efabless format.  This is probably more complicated than
+        # it deserves to be.  Option -ef_format is the best way to specify
+        # efabless format.  However, if it is not specified, then check the
+        # technology PDK directory and the project directory for the tell-tale
+        # ".ef-config" (efabless format) directory vs. ".config" (not efabless
+        # format).
+
+        if not self.ef_format:
+            if os.path.exists(projectpath + '/.ef-config'):
+                self.ef_format = True
+            elif self.techpath:
+                if os.path.exists(self.techpath + '/.ef-config'):
+                    self.ef_format = True
+        else:
+            # Do a quick consistency check.  Honor the -ef_format option but warn if
+            # there is an apparent inconsistency.
+            if os.path.exists(projectpath + '/.config'):
+                self.print('Warning:  -ef_format used in apparently non-efabless setup.')
+            elif self.techpath:
+                if os.path.exists(self.techpath + '/.config'):
+                    self.print('Warning:  -ef_format used in apparently non-efabless setup.')
+
+        # Check for project.json
+
+        jsonname = None
+        if os.path.isfile(projectpath + '/project.json'):
+            jsonname = projectpath + '/project.json'
+        elif os.path.isfile(projectroot + '/' + projectdirname + '.json'):
+            jsonname = projectroot + '/' + projectdirname + '.json'
+        if os.path.isfile(projectroot + '/project.json'):
+            # Just in case this was started from some other subdirectory
+            projectpath = projectroot
+            jsonname = projectroot + '/project.json'
+
+        if jsonname:
+            self.print('Reading project JSON file ' + jsonname)
+            with open(jsonname, 'r') as ifile:
+                topdata = json.load(ifile)
+                if 'data-sheet' in topdata:
+                    dsheet = topdata['data-sheet']
+                    if 'ip-name' in dsheet:
+                        self.project = dsheet['ip-name']
+                        self.projectpath = projectpath
+        else:
+            self.print('No project JSON file; using directory name as the project name.')
+            self.project = os.path.split(projectpath)[1]
+            self.projectpath = projectpath
+
+        self.print('Project name is ' + self.project + ' (' + self.projectpath + ')')
+
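The JSON lookup in set_project() only consults the data-sheet and ip-name keys; a minimal project.json that satisfies it looks like the following (the "my_chip" value is a made-up example):

```python
import json

# Smallest project.json that set_project() can use: only
# data-sheet -> ip-name is read.
sample = '{"data-sheet": {"ip-name": "my_chip"}}'

topdata = json.loads(sample)
project = None
if 'data-sheet' in topdata:
    dsheet = topdata['data-sheet']
    if 'ip-name' in dsheet:
        # Same extraction the method performs.
        project = dsheet['ip-name']
```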
+    # Functions for drag-and-drop capability
+    def add_draggable(self, tag):
+        self.canvas.tag_bind(tag, '<ButtonPress-1>', self.on_button_press)
+        self.canvas.tag_bind(tag, '<ButtonRelease-1>', self.on_button_release)
+        self.canvas.tag_bind(tag, '<B1-Motion>', self.on_button_motion)
+        self.canvas.tag_bind(tag, '<ButtonPress-2>', self.on_button2_press)
+        self.canvas.tag_bind(tag, '<ButtonPress-3>', self.on_button3_press)
+
+    def on_button_press(self, event):
+        '''Beginning drag of an object'''
+        # Find the closest item, then record its tag.
+        locx = event.x + self.canvas.canvasx(0)
+        locy = event.y + self.canvas.canvasy(0)
+        item = self.canvas.find_closest(locx, locy)[0]
+        self.event_data['tag'] = self.canvas.gettags(item)[0]
+        self.event_data['x0'] = event.x
+        self.event_data['y0'] = event.y
+        self.event_data['x'] = event.x
+        self.event_data['y'] = event.y
+
+    def on_button2_press(self, event):
+        '''Flip an object (excluding corners)'''
+        locx = event.x + self.canvas.canvasx(0)
+        locy = event.y + self.canvas.canvasy(0)
+        item = self.canvas.find_closest(locx, locy)[0]
+        tag = self.canvas.gettags(item)[0]
+
+        try:
+            corecell = next(item for item in self.coregroup if item['name'] == tag)
+        except:
+            try:
+                pad = next(item for item in self.Npads if item['name'] == tag)
+            except:
+                pad = None
+            if not pad:
+                try:
+                    pad = next(item for item in self.Epads if item['name'] == tag)
+                except:
+                    pad = None
+            if not pad:
+                try:
+                    pad = next(item for item in self.Spads if item['name'] == tag)
+                except:
+                    pad = None
+            if not pad:
+                try:
+                    pad = next(item for item in self.Wpads if item['name'] == tag)
+                except:
+                    pad = None
+            if not pad:
+                self.print('Error: Object cannot be flipped.')
+            else:
+                # Flip the pad (in the only way meaningful for the pad).
+                orient = pad['o']
+                if orient == 'N':
+                    pad['o'] = 'FN'
+                elif orient == 'E':
+                    pad['o'] = 'FW'
+                elif orient == 'S':
+                    pad['o'] = 'FS'
+                elif orient == 'W':
+                    pad['o'] = 'FE'
+                elif orient == 'FN':
+                    pad['o'] = 'N'
+                elif orient == 'FE':
+                    pad['o'] = 'W'
+                elif orient == 'FS':
+                    pad['o'] = 'S'
+                elif orient == 'FW':
+                    pad['o'] = 'E'
+        else:
+            # Flip the cell.  Use the DEF meaning of flip, which is to
+            # add or subtract 'F' from the orientation.
+            orient = corecell['o']
+            if not 'F' in orient:
+                corecell['o'] = 'F' + orient
+            else:
+                corecell['o'] = orient[1:]
+
+        # Redraw
+        self.populate(0)
+
+    def on_button3_press(self, event):
+        '''Rotate a core object (no pads) '''
+        locx = event.x + self.canvas.canvasx(0)
+        locy = event.y + self.canvas.canvasy(0)
+        item = self.canvas.find_closest(locx, locy)[0]
+        tag = self.canvas.gettags(item)[0]
+
+        try:
+            corecell = next(item for item in self.coregroup if item['name'] == tag)
+        except:
+            self.print('Error: Object cannot be rotated.')
+        else:
+            # Modify its orientation
+            orient = corecell['o']
+            if orient == 'N':
+                corecell['o'] = 'E'
+            elif orient == 'E':
+                corecell['o'] = 'S'
+            elif orient == 'S':
+                corecell['o'] = 'W'
+            elif orient == 'W':
+                corecell['o'] = 'N'
+            elif orient == 'FN':
+                corecell['o'] = 'FW'
+            elif orient == 'FW':
+                corecell['o'] = 'FS'
+            elif orient == 'FS':
+                corecell['o'] = 'FE'
+            elif orient == 'FE':
+                corecell['o'] = 'FN'
+
+            # rewrite the core DEF file
+            self.write_core_def()
+
+        # Redraw
+        self.populate(0)
+
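The orientation if/elif chains in the two handlers above amount to fixed lookup tables over the eight DEF orientations. A compact, table-driven sketch of the same mappings (the names `rotate_cw` and `flip_core` are illustrative, not part of the script):

```python
# 90-degree clockwise rotation over DEF orientations, as in on_button3_press.
ROTATE_CW = {'N': 'E', 'E': 'S', 'S': 'W', 'W': 'N',
             'FN': 'FW', 'FW': 'FS', 'FS': 'FE', 'FE': 'FN'}

def rotate_cw(orient):
    return ROTATE_CW[orient]

def flip_core(orient):
    # DEF meaning of flip for core cells, as in on_button2_press:
    # add or remove the leading 'F'.
    return orient[1:] if orient.startswith('F') else 'F' + orient

# Four rotations return to the starting orientation.
o = 'N'
for _ in range(4):
    o = rotate_cw(o)
```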
+    def on_button_motion(self, event):
+        '''Handle dragging of an object'''
+        # compute how much the mouse has moved
+        delta_x = event.x - self.event_data['x']
+        delta_y = event.y - self.event_data['y']
+        # move the object the appropriate amount
+        self.canvas.move(self.event_data['tag'], delta_x, delta_y)
+        # record the new position
+        self.event_data['x'] = event.x
+        self.event_data['y'] = event.y
+
+    def on_button_release(self, event):
+        '''End drag of an object'''
+
+        # Find the pad associated with the tag and update its position information
+        tag = self.event_data['tag']
+
+        # Collect pads in clockwise order.  Note that E and S rows are not clockwise
+        allpads = []
+        allpads.extend(self.Npads)
+        allpads.extend(self.NEpad)
+        allpads.extend(reversed(self.Epads))
+        allpads.extend(self.SEpad)
+        allpads.extend(reversed(self.Spads))
+        allpads.extend(self.SWpad)
+        allpads.extend(self.Wpads)
+        allpads.extend(self.NWpad)
+
+        # Create a list of row references (also in clockwise order, but no reversing)
+        padrows = [self.Npads, self.NEpad, self.Epads, self.SEpad, self.Spads, self.SWpad, self.Wpads, self.NWpad]
+
+        # Record the row or corner where this pad was located before the move
+        for row in padrows:
+            try:
+                pad = next(item for item in row if item['name'] == tag)
+            except:
+                pass
+            else:
+                padrow = row
+                break
+
+        # Currently there is no procedure to move a pad out of the corner
+        # position;  corners are fixed by definition.
+        if padrow == self.NEpad or padrow == self.SEpad or padrow == self.SWpad or padrow == self.NWpad:
+            # Easier to run generate() than to put the pad back. . .
+            self.generate(0)
+            return
+
+        # Find the original center point of the pad being moved
+
+        padllx = pad['x']
+        padlly = pad['y']
+        if pad['o'] == 'N' or pad['o'] == 'S':
+            padurx = padllx + pad['width']
+            padury = padlly + pad['height']
+        else:
+            padurx = padllx + pad['height']
+            padury = padlly + pad['width']
+        padcx = (padllx + padurx) / 2
+        padcy = (padlly + padury) / 2
+
+        # Add distance from drag information (note that drag position in y
+        # is negative relative to the chip dimensions)
+        padcx += (self.event_data['x'] - self.event_data['x0']) / self.scale
+        padcy -= (self.event_data['y'] - self.event_data['y0']) / self.scale
+
+        # reset the drag information
+        self.event_data['tag'] = None
+        self.event_data['x'] = 0
+        self.event_data['y'] = 0
+        self.event_data['x0'] = 0
+        self.event_data['y0'] = 0
+
+        # Find the distance from the pad to all other pads, and get the two
+        # closest entries.
+
+        wwidth = self.urx - self.llx
+        dist0 = wwidth
+        dist1 = wwidth
+        pad0 = None
+        pad1 = None
+          
+        for npad in allpads:
+            if npad == pad:
+                continue
+
+            npadllx = npad['x']
+            npadlly = npad['y']
+            if npad['o'] == 'N' or npad['o'] == 'S':
+                npadurx = npadllx + npad['width']
+                npadury = npadlly + npad['height']
+            else:
+                npadurx = npadllx + npad['height']
+                npadury = npadlly + npad['width']
+            npadcx = (npadllx + npadurx) / 2
+            npadcy = (npadlly + npadury) / 2
+
+            deltx = npadcx - padcx
+            delty = npadcy - padcy
+            pdist = math.sqrt(deltx * deltx + delty * delty)
+            if pdist < dist0:
+                dist1 = dist0
+                pad1 = pad0
+                dist0 = pdist
+                pad0 = npad
+
+            elif pdist < dist1:
+                dist1 = pdist
+                pad1 = npad
+
+        # Diagnostic
+        # self.print('Two closest pads to pad ' + pad['name'] + ' (' + pad['cell'] + '): ')
+        # self.print(pad0['name'] + ' (' + pad0['cell'] + ') dist = ' + str(dist0))
+        # self.print(pad1['name'] + ' (' + pad1['cell'] + ') dist = ' + str(dist1))
+
+        # Record the row or corner where these pads are
+        for row in padrows:
+            try:
+                testpad = next(item for item in row if item['name'] == pad0['name'])
+            except:
+                pass
+            else:
+                padrow0 = row
+                break
+
+        for row in padrows:
+            try:
+                testpad = next(item for item in row if item['name'] == pad1['name'])
+            except:
+                pass
+            else:
+                padrow1 = row
+                break
+
+        # Remove pad from its own row
+        padrow.remove(pad)
+
+        # Insert pad into new row.  Watch for wraparound from the last entry to the first
+        padidx0 = allpads.index(pad0)
+        padidx1 = allpads.index(pad1)
+        if padidx0 == 0 and padidx1 > 2:
+            padidx1 = -1
+
+        if padidx1 > padidx0:
+            padafter = pad1
+            rowafter = padrow1
+            padbefore = pad0
+            rowbefore = padrow0
+        else:
+            padafter = pad0
+            rowafter = padrow0
+            padbefore = pad1
+            rowbefore = padrow1
+
+        # Do not replace corner positions (? may be necessary ?)
+        if rowafter == self.NWpad:
+            self.Wpads.append(pad)
+        elif rowafter == self.NEpad:
+            self.Npads.append(pad)
+        elif rowafter == self.SEpad:
+            self.Epads.insert(0, pad)
+        elif rowafter == self.SWpad:
+            self.Spads.insert(0, pad)
+        elif rowafter == self.Wpads or rowafter == self.Npads:
+            idx = rowafter.index(padafter)
+            rowafter.insert(idx, pad)
+        elif rowbefore == self.NEpad:
+            self.Epads.append(pad)
+        elif rowbefore == self.SEpad:
+            self.Spads.append(pad)
+        else:
+            # rows E and S are ordered counterclockwise
+            idx = rowbefore.index(padbefore)
+            rowbefore.insert(idx, pad)
+
+        # Re-run padring
+        self.generate(0)
+
+    def on_scrollwheel(self, event):
+        if event.num == 4:
+            zoomval = 1.1
+        elif event.num == 5:
+            zoomval = 0.9
+        else:
+            zoomval = 1.0
+
+        self.scale *= zoomval
+        self.canvas.scale('all', -15 * zoomval, -15 * zoomval, zoomval, zoomval)
+        self.event_data['x'] *= zoomval
+        self.event_data['y'] *= zoomval
+        self.event_data['x0'] *= zoomval
+        self.event_data['y0'] *= zoomval
+        self.frame_configure(event)
+
+    # Callback functions similar to the pad event callbacks above, but for
+    # core cells.  Unlike pad cells, core cells can be rotated and flipped
+    # arbitrarily, and they do not force a recomputation of the padframe
+    # unless their position forces the padframe to expand
+
+    def add_core_draggable(self, tag):
+        self.canvas.tag_bind(tag, '<ButtonPress-1>', self.on_button_press)
+        self.canvas.tag_bind(tag, '<ButtonRelease-1>', self.core_on_button_release)
+        self.canvas.tag_bind(tag, '<B1-Motion>', self.on_button_motion)
+        self.canvas.tag_bind(tag, '<ButtonPress-2>', self.on_button2_press)
+        self.canvas.tag_bind(tag, '<ButtonPress-3>', self.on_button3_press)
+
+    def core_on_button_release(self, event):
+        '''End drag of a core cell'''
+
+        # Find the pad associated with the tag and update its position information
+        tag = self.event_data['tag']
+
+        try:
+            corecell = next(item for item in self.coregroup if item['name'] == tag)
+        except:
+            self.print('Error: cell ' + tag + ' is not in coregroup!')
+        else:  
+            # Modify its position values
+            corex = corecell['x']
+            corey = corecell['y']
+
+            # Add distance from drag information (note that drag position in y
+            # is negative relative to the chip dimensions)
+            deltax = (self.event_data['x'] - self.event_data['x0']) / self.scale
+            deltay = (self.event_data['y'] - self.event_data['y0']) / self.scale
+
+            corecell['x'] = corex + deltax
+            corecell['y'] = corey - deltay
+
+            # rewrite the core DEF file
+            self.write_core_def()
+
+        # reset the drag information
+        self.event_data['tag'] = None
+        self.event_data['x'] = 0
+        self.event_data['y'] = 0
+        self.event_data['x0'] = 0
+        self.event_data['y0'] = 0
+
+    # Critically needed or else frame does not resize to scrollbars!
+    def grid_configure(self, padx, pady):
+        pass
+
+    # Redraw the chip frame view in response to changes in the pad list
+    def redraw_frame(self):
+        self.canvas.coords('boundary', self.llx, self.lly, self.urx, self.ury)
+   
+    # Update the canvas scrollregion to incorporate all the interior windows
+    def frame_configure(self, event):
+        if not self.do_gui:
+            return
+        self.update_idletasks()
+        bbox = self.canvas.bbox("all")
+        if bbox:
+            newbbox = (-15, -15, bbox[2] + 15, bbox[3] + 15)
+            self.canvas.configure(scrollregion = newbbox)
+
+    # Fill the GUI entries with resident data
+    def populate(self, level):
+        if not self.do_gui:
+            return
+
+        if level > 1:
+            self.print('Recursion error:  Returning now.')
+            return
+
+        self.print('Populating floorplan view.')
+
+        # Remove all entries from the canvas
+        self.canvas.delete('all')
+
+        allpads = self.Npads + self.NEpad + self.Epads + self.SEpad + self.Spads + self.SWpad + self.Wpads + self.NWpad
+
+        notfoundlist = []
+
+        for pad in allpads:
+            if 'x' not in pad:
+                self.print('Error:  Pad ' + pad['name'] + ' has no placement information.')
+                continue
+            llx = int(pad['x'])
+            lly = int(pad['y'])
+            pado = pad['o']
+            try:
+                padcell = next(item for item in self.celldefs if item['name'] == pad['cell'])
+            except:
+                # This should not happen (failsafe)
+                if pad['cell'] not in notfoundlist:
+                    self.print('Warning:  there is no cell named ' + pad['cell'] + ' in the libraries.')
+                notfoundlist.append(pad['cell'])
+                continue
+            padw = padcell['width']
+            padh = padcell['height']
+            if 'N' in pado or 'S' in pado:
+                urx = int(llx + padw)
+                ury = int(lly + padh)
+            else:
+                urx = int(llx + padh)
+                ury = int(lly + padw)
+            
+            pad['llx'] = llx
+            pad['lly'] = lly
+            pad['urx'] = urx
+            pad['ury'] = ury
+
+        # Note that the DEF coordinate system is reversed in Y from the canvas. . .
+
+        height = self.ury - self.lly
+        for pad in allpads:
+
+            llx = pad['llx']
+            lly = height - pad['lly']
+            urx = pad['urx']
+            ury = height - pad['ury']
+
+            tag_id = pad['name']
+            if 'subclass' in pad:
+                if pad['subclass'] == 'POWER':
+                    pad_color = 'orange2'
+                elif pad['subclass'] == 'INOUT':
+                    pad_color = 'yellow'
+                elif pad['subclass'] == 'OUTPUT':
+                    pad_color = 'powder blue'
+                elif pad['subclass'] == 'INPUT':
+                    pad_color = 'goldenrod1'
+                elif pad['subclass'] == 'SPACER':
+                    pad_color = 'green yellow'
+                elif pad['class'] == 'ENDCAP':
+                    pad_color = 'green yellow'
+                elif pad['subclass'] == '' or pad['class'] == ';':
+                    pad_color = 'khaki1'
+                else:
+                    self.print('Unhandled pad class ' + pad['class'])
+                    pad_color = 'gray'
+            else:
+                pad_color = 'gray'
+
+            sllx = self.scale * llx
+            slly = self.scale * lly
+            surx = self.scale * urx
+            sury = self.scale * ury
+
+            self.canvas.create_rectangle((sllx, slly), (surx, sury), fill=pad_color, tags=[tag_id])
+            cx = (sllx + surx) / 2
+            cy = (slly + sury) / 2
+
+            s = min(10, pad['width'], pad['height'])
+
+            # Create an indicator line at the bottom left corner of the cell
+            if pad['o'] == 'N':
+                allx = sllx
+                ally = slly - s
+                aurx = sllx + s
+                aury = slly
+            elif pad['o'] == 'E':
+                allx = sllx
+                ally = sury + s
+                aurx = sllx + s
+                aury = sury
+            elif pad['o'] == 'S':
+                allx = surx
+                ally = sury + s
+                aurx = surx - s
+                aury = sury
+            elif pad['o'] == 'W':
+                allx = surx
+                ally = slly - s
+                aurx = surx - s
+                aury = slly
+            elif pad['o'] == 'FN':
+                allx = surx
+                ally = slly - s
+                aurx = surx - s
+                aury = slly
+            elif pad['o'] == 'FE':
+                allx = surx
+                ally = sury + s
+                aurx = surx - s
+                aury = sury
+            elif pad['o'] == 'FS':
+                allx = sllx
+                ally = sury + s
+                aurx = sllx + s
+                aury = sury
+            elif pad['o'] == 'FW':
+                allx = sllx
+                ally = slly - s
+                aurx = sllx + s
+                aury = slly
+            self.canvas.create_line((allx, ally), (aurx, aury), tags=[tag_id])
+ 
+            # Rotate text on top and bottom rows if the tkinter version allows it.
+            if tkinter.TclVersion >= 8.6:
+                if pad['o'] == 'N' or pad['o'] == 'S':
+                    angle = 90
+                else:
+                    angle = 0
+                self.canvas.create_text((cx, cy), text=pad['name'], angle=angle, tags=[tag_id])
+            else:
+                self.canvas.create_text((cx, cy), text=pad['name'], tags=[tag_id])
+
+            # Make the pad draggable
+            self.add_draggable(tag_id)
+
+        # Now add the core cells
+        for cell in self.coregroup:
+            if 'x' not in cell:
+                self.print('Error:  Core cell ' + cell['name'] + ' has no placement information.')
+                continue
+            # else:
+            #     self.print('Diagnostic:  Creating object for core cell ' + cell['name'])
+            llx = int(cell['x'])
+            lly = int(cell['y'])
+            cello = cell['o']
+            try:
+                corecell = next(item for item in self.coredefs if item['name'] == cell['cell'])
+            except StopIteration:
+                # This should not happen (failsafe)
+                if cell['cell'] not in notfoundlist:
+                    self.print('Warning:  there is no cell named ' + cell['cell'] + ' in the libraries.')
+                notfoundlist.append(cell['cell'])
+                continue
+            cellw = corecell['width']
+            cellh = corecell['height']
+            if 'N' in cello or 'S' in cello:
+                urx = int(llx + cellw)
+                ury = int(lly + cellh)
+            else:
+                urx = int(llx + cellh)
+                ury = int(lly + cellw)
+                self.print('NOTE: cell ' + corecell['name'] + ' is rotated, w = ' + str(urx - llx) + '; h = ' + str(ury - lly))
+
+            cell['llx'] = llx
+            cell['lly'] = lly
+            cell['urx'] = urx
+            cell['ury'] = ury
+
+        # Watch for out-of-window position in core cells.
+        corellx = self.llx
+        corelly = self.lly
+        coreurx = self.urx
+        coreury = self.ury
+
+        for cell in self.coregroup:
+
+            if 'llx' not in cell:
+                # Error message for this was handled above
+                continue
+
+            llx = cell['llx']
+            lly = height - cell['lly']
+            urx = cell['urx']
+            ury = height - cell['ury']
+
+            # Check for out-of-window cell
+            if llx < corellx:
+                corellx = llx
+            if lly < corelly:
+                corelly = lly
+            if urx > coreurx:
+                coreurx = urx
+            if ury > coreury:
+                coreury = ury
+
+            tag_id = cell['name']
+            cell_color = 'gray40'
+
+            sllx = self.scale * llx
+            slly = self.scale * lly
+            surx = self.scale * urx
+            sury = self.scale * ury
+
+            self.canvas.create_rectangle((sllx, slly), (surx, sury), fill=cell_color, tags=[tag_id])
+            cx = (sllx + surx) / 2
+            cy = (slly + sury) / 2
+
+            s = min(10, cell['width'], cell['height'])
+
+            # Create an indicator line at the bottom left corner of the cell
+            if cell['o'] == 'N':
+                allx = sllx
+                ally = slly - s
+                aurx = sllx + s
+                aury = slly
+            elif cell['o'] == 'E':
+                allx = sllx
+                ally = sury + s
+                aurx = sllx + s
+                aury = sury
+            elif cell['o'] == 'S':
+                allx = surx
+                ally = sury + s
+                aurx = surx - s
+                aury = sury
+            elif cell['o'] == 'W':
+                allx = surx
+                ally = slly - s
+                aurx = surx - s
+                aury = slly
+            elif cell['o'] == 'FN':
+                allx = surx
+                ally = slly - s
+                aurx = surx - s
+                aury = slly
+            elif cell['o'] == 'FE':
+                allx = surx
+                ally = sury + s
+                aurx = surx - s
+                aury = sury
+            elif cell['o'] == 'FS':
+                allx = sllx
+                ally = sury + s
+                aurx = sllx + s
+                aury = sury
+            elif cell['o'] == 'FW':
+                allx = sllx
+                ally = slly - s
+                aurx = sllx + s
+                aury = slly
+            self.canvas.create_line((allx, ally), (aurx, aury), tags=[tag_id])
+ 
+            # self.print('Created entry for cell ' + cell['name'] + ' at {0:g}, {1:g}'.format(cx, cy))
+ 
+            # Rotate text on top and bottom rows if the tkinter version allows it.
+            if tkinter.TclVersion >= 8.6:
+                if 'N' in cell['o'] or 'S' in cell['o']:
+                    angle = 90
+                else:
+                    angle = 0
+                self.canvas.create_text((cx, cy), text=cell['name'], angle=angle, tags=[tag_id])
+            else:
+                self.canvas.create_text((cx, cy), text=cell['name'], tags=[tag_id])
+
+            # Make the core cell draggable
+            self.add_core_draggable(tag_id)
+
+        # Is there a boundary size defined?
+        if self.urx > self.llx and self.ury > self.lly:
+            self.create_boundary()
+
+        # Did the core extend into negative X or Y?  If so, adjust all canvas
+        # coordinates to fit in the window, or else objects cannot be reached
+        # even by zooming out (since zooming is pinned on the top corner).
+
+        offsetx = 0
+        offsety = 0
+
+        # NOTE:  Probably want to check if the core exceeds the inner
+        # dimension of the pad ring, not the outer (to check and to do).
+
+        if corellx < self.llx:
+            offsetx = self.llx - corellx 
+        if corelly < self.lly:
+            offsety = self.lly - corelly
+        if offsetx > 0 or offsety > 0:
+            self.canvas.move("all", offsetx, offsety)
+            # An offset implies that the chip is core limited, and the
+            # padframe requires additional space.  This can be accomplished
+            # simply by running "Generate".  NOTE:  Since generate() calls
+            # populate(), be VERY SURE that this does not infinitely recurse!
+            self.generate(level)
+
+    # Generate a DEF file of the core area
+
+    def write_core_def(self):
+        self.print('Writing core placement information in DEF file "core.def".')
+
+        mag_path = self.projectpath + '/mag'
+
+        # The core cells must always clear the I/O pads on the left and
+        # bottom (with the ad-hoc margin of self.margin).  If core cells have
+        # been moved to the left or down past the padframe edge, then the
+        # entire core needs to be repositioned.
+
+        # To be done:  draw a boundary around the core, let the edges of that
+        # boundary be draggable, and let the difference between the boundary
+        # and the core area define the margin.
+
+        if self.SWpad != []:
+            corellx = self.SWpad[0]['x'] + self.SWpad[0]['width'] + self.margin
+            corelly = self.SWpad[0]['y'] + self.SWpad[0]['height'] + self.margin
+        else:
+            corellx = self.Wpads[0]['x'] + self.Wpads[0]['height'] + self.margin
+            corelly = self.Spads[0]['y'] + self.Spads[0]['height'] + self.margin
+
+        offsetx = 0
+        offsety = 0
+        for corecell in self.coregroup:
+            if corecell['x'] < corellx:
+                if corellx - corecell['x'] > offsetx:
+                    offsetx = corellx - corecell['x']
+            if corecell['y'] < corelly:
+                if corelly - corecell['y'] > offsety:
+                    offsety = corelly - corecell['y']
+        if offsetx > 0 or offsety > 0:
+            for corecell in self.coregroup:
+                corecell['x'] += offsetx
+                corecell['y'] += offsety
+
+        # Now write the core DEF file
+
+        with open(mag_path + '/core.def', 'w') as ofile:
+            print('DESIGN CORE ;', file=ofile)
+            print('UNITS DISTANCE MICRONS 1000 ;', file=ofile)
+            print('COMPONENTS {0:d} ;'.format(len(self.coregroup)), file=ofile)
+            for corecell in self.coregroup:
+                print('  - ' + corecell['name'] + ' ' + corecell['cell'], file=ofile)
+                print('    + PLACED ( {0:d} {1:d} ) {2:s} ;'.format(int(corecell['x'] * 1000), int(corecell['y'] * 1000), corecell['o']), file=ofile)
+            print('END COMPONENTS', file=ofile)
+            print('END DESIGN', file=ofile)
+
+    # Create the chip boundary area
+
+    def create_boundary(self):
+        scale = self.scale
+        llx = (self.llx - 10) * scale
+        lly = (self.lly - 10) * scale
+        urx = (self.urx + 10) * scale
+        ury = (self.ury + 10) * scale
+
+        pad_color = 'plum1'
+        tag_id = 'boundary'
+        self.canvas.create_rectangle((llx, lly), (urx, ury), outline=pad_color, width=2, tags=[tag_id])
+        # Add text to the middle representing the chip and core areas
+        cx = ((self.llx + self.urx) / 2) * scale
+        cy = ((self.lly + self.ury) / 2) * scale
+        width = self.urx - self.llx
+        height = self.ury - self.lly
+        areatext = 'Chip dimensions (um): {0:g} x {1:g}'.format(width, height)
+        tag_id = 'chiparea' 
+        self.canvas.create_text((cx, cy), text=areatext, tags=[tag_id])
+
+    # Rotate orientation according to self.pad_rotation. 
+
+    def rotate_orientation(self, orient_in):
+        orient_v = ['N', 'E', 'S', 'W', 'N', 'E', 'S', 'W']
+        idxadd = int(self.pad_rotation / 90)
+        idx = orient_v.index(orient_in)
+        return orient_v[idx + idxadd]
+
+    # Read a list of cell macros (name, size, class) from a LEF library
+
+    def read_lef_macros(self, libpath, libname = None, libtype = 'iolib'):
+        if libtype == 'iolib':
+            libtext = 'I/O '
+        elif libtype == 'celllib':
+            libtext = 'core '
+        else:
+            libtext = ''
+
+        macros = []
+
+        if libname:
+            if os.path.splitext(libname)[1] == '':
+                libname += '.lef'
+            leffiles = glob.glob(libpath + '/' + libname)
+        else:
+            leffiles = glob.glob(libpath + '/*.lef')
+        if leffiles == []:
+            if libname:
+                self.print('WARNING:  No file ' + libpath + '/' + libname + '.lef')
+            else:
+                self.print('WARNING:  No files ' + libpath + '/*.lef')
+        for leffile in leffiles:
+            libpath = os.path.split(leffile)[0]
+            libname = os.path.split(libpath)[1]
+            self.print('Reading LEF ' + libtext + 'library ' + leffile)
+            with open(leffile, 'r') as ifile:
+                ilines = ifile.read().splitlines()
+                in_macro = False
+                for iline in ilines:
+                    iparse = iline.split()
+                    if iparse == []:
+                        continue
+                    elif iparse[0] == 'MACRO':
+                        in_macro = True
+                        newmacro = {}
+                        newmacro['name'] = iparse[1]
+                        newmacro[libtype] = leffile
+                        macros.append(newmacro)
+                    elif in_macro:
+                        if iparse[0] == 'END':
+                            if len(iparse) > 1 and iparse[1] == newmacro['name']:
+                                in_macro = False
+                        elif iparse[0] == 'CLASS':
+                            newmacro['class'] = iparse[1]
+                            if len(iparse) > 2:
+                                newmacro['subclass'] = iparse[2]
+
+                                # Use the 'ENDCAP' class to identify pad rotations
+                                # other than BOTTOMLEFT.  This is somewhat ad-hoc
+                                # depending on the foundry;  may not be generally
+                                # applicable.
+
+                                if newmacro['class'] == 'ENDCAP':
+                                    if newmacro['subclass'] == 'TOPLEFT':
+                                        self.pad_rotation = 90
+                                    elif newmacro['subclass'] == 'TOPRIGHT':
+                                        self.pad_rotation = 180
+                                    elif newmacro['subclass'] == 'BOTTOMRIGHT':
+                                        self.pad_rotation = 270
+                            else:
+                                newmacro['subclass'] = None
+                        elif iparse[0] == 'SIZE':
+                            newmacro['width'] = float(iparse[1])
+                            newmacro['height'] = float(iparse[3])
+                        elif iparse[0] == 'ORIGIN':
+                            newmacro['x'] = float(iparse[1])
+                            newmacro['y'] = float(iparse[2])
+        return macros
+          
+    # Read a list of cell names from a verilog file
+    # If filename is relative, then check in the same directory as the verilog
+    # top-level netlist (vlogpath) and in the subdirectory 'source/' of the top-
+    # level directory.  Also check in the ~/design/ip/ directory.  These are
+    # common include paths for the simulation.
+
+    def read_verilog_lib(self, incpath, vlogpath):
+        iocells = []
+        if not os.path.isfile(incpath) and incpath[0] != '/':
+            locincpath = vlogpath + '/' + incpath
+            if not os.path.isfile(locincpath):
+                locincpath = vlogpath + '/source/' + incpath
+            if not os.path.isfile(locincpath):
+                projectpath = os.path.split(vlogpath)[0]
+                designpath = os.path.split(projectpath)[0]
+                locincpath = designpath + '/ip/' + incpath
+        else:
+            locincpath = incpath
+
+        if not os.path.isfile(locincpath):
+            self.print('File ' + incpath + ' not found (at ' + locincpath + ')!')
+        else:
+            self.print('Reading verilog library ' + locincpath)
+            with open(locincpath, 'r') as ifile:
+                ilines = ifile.read().splitlines()
+                for iline in ilines:
+                    iparse = re.split('[\t ()]', iline)
+                    while '' in iparse:
+                        iparse.remove('')
+                    if iparse == []:
+                        continue
+                    elif iparse[0] == 'module':
+                        iocells.append(iparse[1])
+        return iocells
+
+    # Generate a LEF abstract view from a magic layout.  If "outpath" is not
+    # None, then write the output to outpath (this is required if the input
+    # file is in a read-only directory).
+
+    def write_lef_file(self, magfile, outpath=None):
+        mag_path = os.path.split(magfile)[0]
+        magfullname = os.path.split(magfile)[1]
+        module = os.path.splitext(magfullname)[0]
+
+        if outpath:
+            write_path = outpath
+        else:
+            write_path = mag_path
+
+        self.print('Generating LEF view from layout for module ' + module)
+
+        with open(write_path + '/pfg_write_lef.tcl', 'w') as ofile:
+            print('drc off', file=ofile)
+            print('box 0 0 0 0', file=ofile)
+            # NOTE:  Using "-force" option in case an IP with a different but
+            # compatible tech is used (e.g., EFHX035A IP inside EFXH035C).  This
+            # is not checked for legality!
+            if outpath:
+                print('load ' + magfile + ' -force', file=ofile)
+            else:
+                print('load ' + module + ' -force', file=ofile)
+            print('lef write -hide', file=ofile)
+            print('quit', file=ofile)
+
+        magicexec = self.magic_path if self.magic_path else 'magic'
+        mproc = subprocess.Popen([magicexec, '-dnull', '-noconsole',
+			'pfg_write_lef.tcl'],
+			stdin = subprocess.PIPE, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, cwd = write_path, universal_newlines = True)
+
+        self.watch(mproc)
+        os.remove(write_path + '/pfg_write_lef.tcl')
+
+    # Watch a running process, polling for output and updating the GUI message
+    # window as output arrives.  Return only when the process has exited.
+    # Note that this routine cannot pass anything to the process on stdin, so
+    # any input to the process must come from a file.
+
+    def watch(self, process):
+        if process is None:
+            return
+
+        while True:
+            status = process.poll()
+            if status is not None:
+                try:
+                    outputpair = process.communicate(timeout=1)
+                except (ValueError, subprocess.TimeoutExpired):
+                    self.print("Process forced stop, status " + str(status))
+                else:
+                    for line in outputpair[0].splitlines():
+                        self.print(line)
+                    for line in outputpair[1].splitlines():
+                        self.print(line, file=sys.stderr)
+                break
+            else:
+                sresult = select.select([process.stdout, process.stderr], [], [], 0)[0]
+                if process.stdout in sresult:
+                    outputline = process.stdout.readline().strip()
+                    self.print(outputline)
+                elif process.stderr in sresult:
+                    outputline = process.stderr.readline().strip()
+                    self.print(outputline, file=sys.stderr)
+                else:
+                    self.update_idletasks()
+
+    # Reimport the pad list by reading the top-level verilog netlist.  Determine
+    # what pads are listed in the file, and check against the existing pad list.
+
+    # The verilog/ directory should have a .v file containing a module of the
+    # same name as self.project (ip-name).  The .v file usually has the same
+    # root name as well (but not necessarily).  To do:  Handle import of
+    # projects having a top-level schematic instead of a verilog netlist.
+
+    def vlogimport(self):
+
+        if self.ef_format:
+            config_dir = '/.ef-config'
+        else:
+            config_dir = '/.config'
+
+        # First find the process PDK name for this project.  Read the nodeinfo.json
+        # file and find the list of I/O cell libraries.  
+
+        if self.techpath:
+            pdkpath = self.techpath
+        elif os.path.islink(self.projectpath + config_dir + '/techdir'):
+            pdkpath = os.path.realpath(self.projectpath + config_dir + '/techdir')
+        else:
+            self.print('Error:  Cannot determine path to PDK.  Try using option -tech-path=')
+            return
+
+        self.print('Importing verilog sources.')
+
+        nodeinfopath = pdkpath + config_dir + '/nodeinfo.json'
+        ioleflist = []
+        if os.path.exists(nodeinfopath):
+            self.print('Reading known I/O cell libraries from ' + nodeinfopath)
+            with open(nodeinfopath, 'r') as ifile:
+                itop = json.load(ifile)
+                if 'iocells' in itop:
+                    ioleflist = []
+                    for iolib in itop['iocells']:
+                        if '/' in iolib:
+                            # Entries <lib>/<cell> refer to specific files
+                            if self.ef_format:
+                                iolibpath = pdkpath + '/libs.ref/lef/' + iolib
+                            else:
+                                iolibpath = pdkpath + '/libs.ref/' + iolib
+                            if os.path.splitext(iolib)[1] == '':
+                                if not os.path.exists(iolibpath):
+                                    iolibpath = iolibpath + '.lef'
+                            if not os.path.exists(iolibpath):
+                                self.print('Warning: nodeinfo.json bad I/O library path ' + iolibpath)
+                            ioleflist.append(iolibpath)
+                        else:
+                            # All other entries refer to everything in the directory.
+                            if self.ef_format:
+                                iolibpath = pdkpath + '/libs.ref/lef/' + iolib
+                            else:
+                                iolibpath = pdkpath + '/libs.ref/' + iolib + '/lef/'
+                            iolibfiles = glob.glob(iolibpath + '/*.lef')
+                            if len(iolibfiles) == 0:
+                                self.print('Warning: nodeinfo.json bad I/O library path ' + iolibpath)
+                            ioleflist.extend(iolibfiles)
+        else:
+            # Diagnostic
+            self.print('Cannot read PDK information file ' + nodeinfopath)
+
+        # Fallback behavior:  List everything in libs.ref/lef/ beginning with "IO"
+        if len(ioleflist) == 0:
+            if self.ef_format:
+                ioleflist = glob.glob(pdkpath + '/libs.ref/lef/IO*/*.lef')
+            else:
+                ioleflist = glob.glob(pdkpath + '/libs.ref/IO*/lef/*.lef')
+
+        if len(ioleflist) == 0:
+            self.print('Cannot find any I/O cell libraries for this technology')
+            return
+
+        # Read the LEF libraries to get a list of all available cells.  Keep
+        # this list of cells in "celldefs".
+
+        celldefs = []
+        ioliblist = []
+        ioleflibs = []
+        for iolib in ioleflist:
+            iolibpath = os.path.split(iolib)[0]
+            iolibfile = os.path.split(iolib)[1]
+            ioliblist.append(os.path.split(iolibpath)[1])
+            celldefs.extend(self.read_lef_macros(iolibpath, iolibfile, 'iolib'))
+
+        verilogcells = []
+        newpadlist = []
+        coredefs = []
+        corecells = []
+        corecelllist = []
+        lefprocessed = []
+
+        busrex = re.compile(r'.*\[[ \t]*([0-9]+)[ \t]*:[ \t]*([0-9]+)[ \t]*\]')
+
+        vlogpath = self.projectpath + '/verilog'
+        vlogfile = vlogpath + '/' + self.project + '.v'
+
+        # Verilog netlists are too difficult to parse from a simple script.
+        # Use qflow tools to convert to SPICE, if they are available.  Parse
+        # the verilog only for "include" statements to find the origin of
+        # the various IP blocks, and then parse the SPICE file to get the
+        # full list of instances.
+        #
+        # (to be done)
+
+        if os.path.isfile(vlogfile):
+            with open(vlogfile, 'r') as ifile:
+                vloglines = ifile.read().splitlines()
+                for vlogline in vloglines:
+                    vlogparse = re.split('[\t ()]', vlogline)
+                    while '' in vlogparse:
+                        vlogparse.remove('')
+                    if vlogparse == []:
+                        continue
+                    elif vlogparse[0] == '//':
+                        continue
+                    elif vlogparse[0] == '`include':
+                        incpath = vlogparse[1].strip('"')
+                        libpath = os.path.split(incpath)[0]
+                        libname = os.path.split(libpath)[1]
+                        libfile = os.path.split(incpath)[1]
+
+                        # Read the verilog library for module names to match
+                        # against macro names in celldefs.
+                        modulelist = self.read_verilog_lib(incpath, vlogpath)
+                        matching = list(item for item in celldefs if item['name'] in modulelist)
+                        for imatch in matching:
+                            verilogcells.append(imatch['name'])
+                            leffile = imatch['iolib']
+                            if leffile not in ioleflibs:
+                                ioleflibs.append(leffile)
+
+                        # Read a corresponding LEF file entry for non-I/O macros, if one
+                        # can be found (this handles files in the PDK).
+                        if len(matching) == 0:
+                            if libname != '':
+                                # (NOTE:  Assumes full path starting with '/')
+                                lefpath = libpath.replace('verilog', 'lef')
+                                lefname = libfile.replace('.v', '.lef')
+                                if not os.path.exists(lefpath + '/' + lefname):
+                                    leffiles = glob.glob(lefpath + '/*.lef')
+                                else:
+                                    leffiles = [lefpath + '/' + lefname]
+
+                                for leffile in leffiles:
+                                    if leffile in ioleflibs:
+                                        continue
+                                    elif leffile in lefprocessed:
+                                        continue
+                                    else:
+                                        lefprocessed.append(leffile)
+
+                                    lefname = os.path.split(leffile)[1]
+
+                                    newcoredefs = self.read_lef_macros(lefpath, lefname, 'celllib')
+                                    coredefs.extend(newcoredefs)
+                                    corecells.extend(list(item['name'] for item in newcoredefs))
+
+                                if leffiles == []:
+                                    maglefname = libfile.replace('.v', '.mag')
+
+                                    # Handle PDK files with a maglef/ view but no LEF file.
+                                    maglefpath = libpath.replace('verilog', 'maglef')
+                                    if not os.path.exists(maglefpath + '/' + maglefname):
+                                        magleffiles = glob.glob(maglefpath + '/*.mag')
+                                    else:
+                                        magleffiles = [maglefpath + '/' + maglefname]
+
+                                    if magleffiles == []:
+                                        # Handle user ip/ files with a maglef/ view but
+                                        # no LEF file.
+                                        maglefpath = libpath.replace('verilog', 'maglef')
+                                        designpath = os.path.split(self.projectpath)[0]
+                                        maglefpath = designpath + '/ip/' + maglefpath
+
+                                        if not os.path.exists(maglefpath + '/' + maglefname):
+                                            magleffiles = glob.glob(maglefpath + '/*.mag')
+                                        else:
+                                            magleffiles = [maglefpath + '/' + maglefname]
+
+                                    for magleffile in magleffiles:
+                                        # Generate LEF file.  Since PDK and ip/ entries
+                                        # are not writeable, write into the project mag/
+                                        # directory.
+                                        magpath = self.projectpath + '/mag'
+                                        magname = os.path.split(magleffile)[1]
+                                        magroot = os.path.splitext(magname)[0]
+                                        leffile = magpath + '/' + magroot + '.lef'
+                                        if not os.path.isfile(leffile):
+                                            self.write_lef_file(magleffile, magpath)
+
+                                        if leffile in ioleflibs:
+                                            continue
+                                        elif leffile in lefprocessed:
+                                            continue
+                                        else:
+                                            lefprocessed.append(leffile)
+
+                                        lefname = os.path.split(leffile)[1]
+
+                                        newcoredefs = self.read_lef_macros(magpath, lefname, 'celllib')
+                                        coredefs.extend(newcoredefs)
+                                        corecells.extend(list(item['name'] for item in newcoredefs))
+                                        # LEF files generated on-the-fly are not needed
+                                        # after they have been parsed.
+                                        # os.remove(leffile)
+
+                            # Check if all modules in modulelist are represented by
+                            # corresponding LEF macros.  If not, then go looking for a LEF
+                            # file in the mag/ or maglef/ directory.  Then, go looking for
+                            # a .mag file in the mag/ or maglef/ directory, and build a
+                            # LEF macro from it.
+
+                            matching = list(item['name'] for item in coredefs if item['name'] in modulelist)
+                            for module in modulelist:
+                                if module not in matching:
+                                    lefpath = self.projectpath + '/lef'
+                                    magpath = self.projectpath + '/mag'
+                                    maglefpath = self.projectpath + '/mag'
+                                    lefname = libfile.replace('.v', '.lef')
+
+                                    # If the verilog file root name is not the same as
+                                    # the module name, then make a quick check for a
+                                    # LEF file with the same root name as the verilog.
+                                    # That indicates that the module does not exist in
+                                    # the LEF file, probably because it is a primary
+                                    # module that does not correspond to any layout.
+
+                                    leffile = lefpath + '/' + lefname
+                                    if os.path.exists(leffile):
+                                        self.print('Diagnostic: module ' + module + ' is not in ' + leffile + ' (probably a primary module)')
+                                        continue
+
+                                    leffile = magpath + '/' + lefname
+                                    istemp = False
+                                    if not os.path.exists(leffile):
+                                        magname = libfile.replace('.v', '.mag')
+                                        magfile = magpath + '/' + magname
+                                        if os.path.exists(magfile):
+                                            self.print('Diagnostic: Found a .mag file for ' + module + ' in ' + magfile)
+                                            self.write_lef_file(magfile)
+                                            istemp = True
+                                        else:
+                                            magleffile = maglefpath + '/' + lefname
+                                            if not os.path.exists(magleffile):
+                                                self.print('Diagnostic: module ' + module + ' has no LEF file ' + leffile + ' or ' + magleffile)
+                                                magleffile = maglefpath + '/' + magname
+                                                if os.path.exists(magleffile):
+                                                    self.print('Diagnostic: Found a .mag file for ' + module + ' in ' + magleffile)
+                                                    if os.access(maglefpath, os.W_OK):
+                                                        self.write_lef_file(magleffile)
+                                                        leffile = magleffile
+                                                        istemp = True
+                                                    else:
+                                                        self.write_lef_file(magleffile, magpath)
+                                                else:
+                                                    self.print('Did not find a file ' + magfile + ' or ' + magleffile)
+                                                    # self.print('Warning: module ' + module + ' has no LEF or .mag views')
+                                            else:
+                                                self.print('Diagnostic: Found a LEF file for ' + module + ' in ' + magleffile)
+                                                leffile = magleffile
+                                    else:
+                                        self.print('Diagnostic: Found a LEF file for ' + module + ' in ' + leffile)
+
+                                    if os.path.exists(leffile):
+                                        if leffile in lefprocessed:
+                                            continue
+                                        else:
+                                            lefprocessed.append(leffile)
+
+                                        newcoredefs = self.read_lef_macros(magpath, lefname, 'celllib')
+                                        # The LEF file generated on-the-fly is not needed
+                                        # any more after parsing the macro(s).
+                                        # if istemp:
+                                        #     os.remove(leffile)
+                                        coredefs.extend(newcoredefs)
+                                        corecells.extend(list(item['name'] for item in newcoredefs))
+                                    else:
+                                        # self.print('Failed to find a LEF view for module ' + module)
+                                        pass
+
+                    elif vlogparse[0] in verilogcells:
+                        # Check for array of pads
+                        bushigh = buslow = -1
+                        if len(vlogparse) >= 3:
+                            bmatch = busrex.match(vlogline)
+                            if bmatch:
+                                bushigh = int(bmatch.group(1))
+                                buslow = int(bmatch.group(2))
+                                
+                        for i in range(buslow, bushigh + 1):
+                            newpad = {}
+                            if i >= 0:
+                                newpad['name'] = vlogparse[1] + '[' + str(i) + ']'
+                            else:
+                                newpad['name'] = vlogparse[1]
+                            newpad['cell'] = vlogparse[0]
+                            padcell = next(item for item in celldefs if item['name'] == vlogparse[0])
+                            newpad['iolib'] = padcell['iolib']
+                            newpad['class'] = padcell['class']
+                            newpad['subclass'] = padcell['subclass']
+                            newpad['width'] = padcell['width']
+                            newpad['height'] = padcell['height']
+                            newpadlist.append(newpad)
+
+                    elif vlogparse[0] in corecells:
+                        # Check for array of cells
+                        bushigh = buslow = -1
+                        if len(vlogparse) >= 3:
+                            bmatch = busrex.match(vlogline)
+                            if bmatch:
+                                bushigh = int(bmatch.group(1))
+                                buslow = int(bmatch.group(2))
+                                
+                        for i in range(buslow, bushigh + 1):
+                            newcorecell = {}
+                            if i >= 0:
+                                newcorecell['name'] = vlogparse[1] + '[' + str(i) + ']'
+                            else:
+                                newcorecell['name'] = vlogparse[1]
+                            newcorecell['cell'] = vlogparse[0]
+                            corecell = next(item for item in coredefs if item['name'] == vlogparse[0])
+                            newcorecell['celllib'] = corecell['celllib']
+                            newcorecell['class'] = corecell['class']
+                            newcorecell['subclass'] = corecell['subclass']
+                            newcorecell['width'] = corecell['width']
+                            newcorecell['height'] = corecell['height']
+                            corecelllist.append(newcorecell)
+
+        self.print('')
+        self.print('Source file information:')
+        self.print('Source filename: ' + vlogfile)
+        self.print('Number of I/O libraries is ' + str(len(ioleflibs)))
+        self.print('Number of library cells in I/O libraries used: ' + str(len(verilogcells)))
+        self.print('Number of core celldefs is ' + str(len(coredefs)))
+        self.print('')
+        self.print('Number of I/O cells in design: ' + str(len(newpadlist)))
+        self.print('Number of core cells in design: ' + str(len(corecelllist)))
+        self.print('')
+
+        # Save the results
+        self.celldefs = celldefs
+        self.coredefs = coredefs
+        self.vlogpads = newpadlist
+        self.corecells = corecelllist
+        self.ioleflibs = ioleflibs
+
+    # Check self.vlogpads, which was created during import (above) against
+    # self.(N,S,W,E)pads, which was read from the DEF file (if there was one)
+    # Also check self.corecells, which was created during import against
+    # self.coregroup, which was read from the DEF file.
+
+    def resolve(self):
+        self.print('Resolve differences in verilog and LEF views.')
+
+        samepads = []
+        addedpads = []
+        removedpads = []
+
+        # (1) Create list of entries that are in both self.vlogpads and self.()pads
+        # (2) Create list of entries that are in self.vlogpads but not in self.()pads
+
+        allpads = self.Npads + self.NEpad + self.Epads + self.SEpad + self.Spads + self.SWpad + self.Wpads + self.NWpad
+
+        for pad in self.vlogpads:
+            newpadname = pad['name']
+            try:
+                lpad = next(item for item in allpads if item['name'] == newpadname)
+            except StopIteration:
+                addedpads.append(pad)
+            else:
+                samepads.append(lpad)
+
+        # (3) Create list of entries that are in allpads but not in self.vlogpads
+        for pad in allpads:
+            newpadname = pad['name']
+            try:
+                lpad = next(item for item in self.vlogpads if item['name'] == newpadname)
+            except StopIteration:
+                removedpads.append(pad)
+
+        # Print results
+        if len(addedpads) > 0:
+            self.print('Added pads:')
+            for pad in addedpads:
+                self.print(pad['name'] + ' (' + pad['cell'] + ')')
+
+        if len(removedpads) > 0:
+            plist = []
+            nspacers = 0
+            for pad in removedpads:
+                if 'subclass' in pad and pad['subclass'] == 'SPACER':
+                    nspacers += 1
+                else:
+                    plist.append(pad)
+
+            if nspacers > 0:
+                self.print(str(nspacers) + ' spacer cells ignored.')
+            if len(plist) > 0:
+                self.print('Removed pads:')
+                for pad in plist:
+                    self.print(pad['name'] + ' (' + pad['cell'] + ')')
+
+        if len(addedpads) + len(removedpads) == 0:
+            self.print('Pad list has not changed.')
+
+        # Remove all cells from the "removed" list, with comment
+
+        allpads = [self.Npads, self.NEpad, self.Epads, self.SEpad, self.Spads, self.SWpad, self.Wpads, self.NWpad]
+
+        for pad in removedpads:
+            rname = pad['name']
+            for row in allpads:
+                try:
+                    rpad = next(item for item in row if item['name'] == rname)
+                except StopIteration:
+                    rpad = None
+                else:
+                    row.remove(rpad)
+
+        # Now the verilog file has no placement information, so the old padlist
+        # entries (if they exist) are preferred.  Add to these the new padlist
+        # entries
+
+        # First pass for unassigned pads:  Use of "CLASS ENDCAP" is preferred
+        # for identifying corner pads.  Otherwise, if 'CORNER' or 'corner' is
+        # in the pad name, then make sure there is one per row in the first
+        # position.  This is not foolproof and depends on the cell library
+        # using the text 'corner' in the name of the corner cell.  However,
+        # if the ad hoc methods fail, the user can still manually move the
+        # corner cells to the right place (to be done:  Identify if library
+        # uses ENDCAP designation for corner cells up front;  don't go
+        # looking for 'corner' text if the cells are easily identifiable by
+        # LEF class).
+
+        for pad in addedpads[:]:
+            iscorner = False
+            if 'class' in pad and pad['class'] == 'ENDCAP':
+                iscorner = True
+            elif 'CORNER' in pad['cell'].upper():
+                iscorner = True
+           
+            if iscorner:
+                if self.NWpad == []:
+                    self.NWpad.append(pad)
+                    pad['o'] = 'E'
+                    addedpads.remove(pad)
+                elif self.NEpad == []:
+                    self.NEpad.append(pad)
+                    pad['o'] = 'S'
+                    addedpads.remove(pad)
+                elif self.SEpad == []:
+                    self.SEpad.append(pad)
+                    pad['o'] = 'W'
+                    addedpads.remove(pad)
+                elif self.SWpad == []:
+                    self.SWpad.append(pad)
+                    pad['o'] = 'N'
+                    addedpads.remove(pad)
+
+        numN = len(self.Npads)
+        numS = len(self.Spads)
+        numE = len(self.Epads)
+        numW = len(self.Wpads)
+
+        minnum = min(numN, numS, numE, numW)
+        minnum = max(minnum, int(len(addedpads) / 4))
+
+        # Add pads in clockwise order.  Note that S and E pads are defined counterclockwise
+        for pad in addedpads:
+            if numN < minnum:
+                self.Npads.append(pad)
+                numN += 1
+                pad['o'] = 'S'
+                self.print("Adding pad " + pad['name'] + " to Npads")
+            elif numE < minnum:
+                self.Epads.insert(0, pad)
+                numE += 1
+                pad['o'] = 'W'
+                self.print("Adding pad " + pad['name'] + " to Epads")
+            elif numS < minnum:
+                self.Spads.insert(0, pad)
+                numS += 1
+                pad['o'] = 'N'
+                self.print("Adding pad " + pad['name'] + " to Spads")
+            # elif numW < minnum:
+            else:
+                self.Wpads.append(pad)
+                numW += 1
+                pad['o'] = 'E'
+                self.print("Adding pad " + pad['name'] + " to Wpads")
+
+            minnum = min(numN, numS, numE, numW)
+            minnum = max(minnum, int(len(addedpads) / 4))
+
+        # Make sure all pads have included information from the cell definition
+
+        allpads = self.Npads + self.NEpad + self.Epads + self.SEpad + self.Spads + self.SWpad + self.Wpads + self.NWpad
+
+        for pad in allpads:
+            if 'width' not in pad:
+                try:
+                    celldef = next(item for item in self.celldefs if item['name'] == pad['cell'])
+                except StopIteration:
+                    self.print('Cell ' + pad['cell'] + ' not found!')
+                else:
+                    pad['width'] = celldef['width']
+                    pad['height'] = celldef['height']
+                    pad['class'] = celldef['class']
+                    pad['subclass'] = celldef['subclass']
+
+        # Now treat the core cells in the same way (resolve list parsed from verilog
+        # against the list parsed from DEF)	
+
+        # self.print('Diagnostic: ')
+        # self.print('self.corecells = ' + str(self.corecells))
+        # self.print('self.coregroup = ' + str(self.coregroup))
+
+        samecore = []
+        addedcore = []
+        removedcore = []
+
+        # (1) Create list of entries that are in both self.corecells and self.coregroup
+        # (2) Create list of entries that are in self.corecells but not in self.coregroup
+
+        for cell in self.corecells:
+            newcellname = cell['name']
+            try:
+                lcore = next(item for item in self.coregroup if item['name'] == newcellname)
+            except StopIteration:
+                addedcore.append(cell)
+            else:
+                samecore.append(lcore)
+
+        # (3) Create list of entries that are in self.coregroup but not in self.corecells
+        for cell in self.coregroup:
+            newcellname = cell['name']
+            try:
+                lcore = next(item for item in self.corecells if item['name'] == newcellname)
+            except StopIteration:
+                removedcore.append(cell)
+
+        # Print results
+        if len(addedcore) > 0:
+            self.print('Added core cells:')
+            for cell in addedcore:
+                self.print(cell['name'] + ' (' + cell['cell'] + ')')
+
+        if len(removedcore) > 0:
+            self.print('Removed core cells:')
+            for cell in removedcore:
+                self.print(cell['name'] + ' (' + cell['cell'] + ')')
+
+        if len(addedcore) + len(removedcore) == 0:
+            self.print('Core cell list has not changed.')
+
+        # Remove all cells from the "removed" list
+
+        coregroup = self.coregroup
+        for cell in removedcore:
+            rname = cell['name']
+            try:
+                rcell = next(item for item in coregroup if item['name'] == rname)
+            except StopIteration:
+                rcell = None
+            else:
+                coregroup.remove(rcell)
+
+        # Add all cells from the "added" list to coregroup
+
+        for cell in addedcore:
+            rname = cell['name']
+            try:
+                rcell = next(item for item in coregroup if item['name'] == rname)
+            except StopIteration:
+                coregroup.append(cell)
+                if 'o' not in cell:
+                    cell['o'] = 'N'
+                if 'x' not in cell:
+                    if len(self.Wpads) > 0:
+                        pad = self.Wpads[0]
+                        padx = pad['x'] if 'x' in pad else 0
+                        cell['x'] = padx + pad['height'] + self.margin
+                    else:
+                        cell['x'] = self.margin
+                if 'y' not in cell:
+                    if len(self.Spads) > 0:
+                        pad = self.Spads[0]
+                        pady = pad['y'] if 'y' in pad else 0
+                        cell['y'] = pady + pad['height'] + self.margin
+                    else:
+                        cell['y'] = self.margin
+
+        # Make sure all core cells have included information from the cell definition
+
+        for cell in coregroup:
+            if 'width' not in cell:
+                try:
+                    coredef = next(item for item in self.coredefs if item['name'] == cell['cell'])
+                except StopIteration:
+                    self.print('Cell ' + cell['cell'] + ' not found!')
+                else:
+                    cell['width'] = coredef['width']
+                    cell['height'] = coredef['height']
+                    cell['class'] = coredef['class']
+                    cell['subclass'] = coredef['subclass']
+
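The added/removed bookkeeping used throughout resolve() is a set difference keyed on each entry's 'name' field. A minimal standalone sketch of that pattern (the pad names and lists here are invented for illustration, not the class's own data):

```python
def diff_by_name(newlist, oldlist):
    """Split two lists of dicts into (same, added, removed) by 'name'."""
    oldnames = set(item['name'] for item in oldlist)
    newnames = set(item['name'] for item in newlist)
    # Entries present in both lists (taken from the old list, since the
    # old entries may carry placement information worth preserving).
    same = [item for item in oldlist if item['name'] in newnames]
    # Entries only in the new (verilog-derived) list.
    added = [item for item in newlist if item['name'] not in oldnames]
    # Entries only in the old (DEF-derived) list.
    removed = [item for item in oldlist if item['name'] not in newnames]
    return same, added, removed

# Hypothetical example data:
vlogpads = [{'name': 'io[0]'}, {'name': 'io[1]'}, {'name': 'vdd'}]
defpads = [{'name': 'io[0]'}, {'name': 'spacer_1'}]
same, added, removed = diff_by_name(vlogpads, defpads)
print([p['name'] for p in added])    # → ['io[1]', 'vdd']
print([p['name'] for p in removed])  # → ['spacer_1']
```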
+    # Generate a new padframe by writing the configuration file, running
+    # padring, reading back the DEF file, and (re)populating the workspace
+
+    def generate(self, level):
+        self.print('Generate legal padframe using padring')
+
+        # Write out the configuration file
+        self.writeconfig()
+        # Run the padring app
+        self.runpadring()
+        # Rotate pads in the output if pad orientations are different from
+        # padring's expectations
+        self.rotate_pads_in_def()
+        # Read the placement information back from the generated DEF file
+        self.readplacement()
+        # Resolve differences (e.g., remove spacers)
+        self.resolve()
+        # Recreate and draw the padframe view on the canvas
+        self.populate(level + 1)
+        self.frame_configure(None)
+
+    # Write a new configuration file
+
+    def writeconfig(self):
+        mag_path = self.projectpath + '/mag'
+        if not os.path.exists(mag_path):
+            self.print('Error:  No mag/ directory exists in the project path.  Cannot write config file.')
+            return
+
+        self.print('Writing padring configuration file.')
+
+        # Determine cell width and height from pad sizes.
+        # NOTE:  This compresses the chip to the minimum dimensions
+        # allowed by the arrangement of pads.  Use a "core" block to
+        # force the area larger than minimum (not yet implemented)
+        
+        topwidth = 0
+        for pad in self.Npads:
+            if 'width' not in pad:
+                self.print('No width: pad = ' + str(pad))
+            topwidth += pad['width']
+
+        # Add in the corner cells
+        if self.NWpad != []:
+            topwidth += self.NWpad[0]['height']
+        if self.NEpad != []:
+            topwidth += self.NEpad[0]['width']
+
+        botwidth = 0
+        for pad in self.Spads:
+            botwidth += pad['width']
+
+        # Add in the corner cells
+        if self.SWpad != []:
+            botwidth += self.SWpad[0]['width']
+        if self.SEpad != []:
+            botwidth += self.SEpad[0]['height']
+
+        width = max(botwidth, topwidth)
+
+        # if width < self.urx - self.llx:
+        #     width = self.urx - self.llx
+
+        leftheight = 0
+        for pad in self.Wpads:
+            leftheight += pad['width']
+
+        # Add in the corner cells
+        if self.NWpad != []:
+            leftheight += self.NWpad[0]['height']
+        if self.SWpad != []:
+            leftheight += self.SWpad[0]['width']
+
+        rightheight = 0
+        for pad in self.Epads:
+            rightheight += pad['width']
+
+        # Add in the corner cells
+        if self.NEpad != []:
+            rightheight += self.NEpad[0]['width']
+        if self.SEpad != []:
+            rightheight += self.SEpad[0]['height']
+
+        height = max(leftheight, rightheight)
+
+        # Check the dimensions of the core cells.  If they exceed the available
+        # padframe area, then expand the padframe to accommodate the core.
+
+        corellx = coreurx = (self.llx + self.urx) / 2
+        corelly = coreury = (self.lly + self.ury) / 2
+
+        for corecell in self.coregroup:
+            corient = corecell['o']
+            if 'S' in corient or 'N' in corient:
+                cwidth = corecell['width']
+                cheight = corecell['height']
+            else:
+                cwidth = corecell['height']
+                cheight = corecell['width']
+
+            if corecell['x'] < corellx:
+                corellx = corecell['x']
+            if corecell['x'] + cwidth > coreurx:
+                coreurx = corecell['x'] + cwidth
+            if corecell['y'] < corelly:
+                corelly = corecell['y']
+            if corecell['y'] + cheight > coreury:
+                coreury = corecell['y'] + cheight
+
+        coreheight = coreury - corelly
+        corewidth = coreurx - corellx
+
+        # Ignoring the possibility of overlaps with nonstandard-sized pads,
+        # assuming that the user has visually separated them.  Only check
+        # the core bounds against the standard padframe inside edge.
+
+        if self.SWpad != [] and self.SEpad != []:
+            if corewidth > width - self.SWpad[0]['width'] - self.SEpad[0]['width']:
+                width = corewidth + self.SWpad[0]['width'] + self.SEpad[0]['width']
+        if self.NWpad != [] and self.SWpad != []:
+            if coreheight > height - self.NWpad[0]['height'] - self.SWpad[0]['height']:
+                height = coreheight + self.NWpad[0]['height'] + self.SWpad[0]['height']
+
+        # Core cells are given a margin of self.margin from the pad inside edge, so the
+        # core area passed to the padring app is 2 * self.margin larger than the
+        # measured size of the core area.
+        width += 2 * self.margin
+        height += 2 * self.margin
+
+        if not self.keep_cfg or not os.path.exists(mag_path + '/padframe.cfg'):
+
+            if os.path.exists(mag_path + '/padframe.cfg'):
+                # Copy the previous padframe.cfg file to a backup.  In case something
+                # goes badly wrong, this should be the only file overwritten, and can
+                # be recovered from the backup.
+                shutil.copy(mag_path + '/padframe.cfg', mag_path + '/padframe.cfg.bak')
+
+            with open(mag_path + '/padframe.cfg', 'w') as ofile:
+                print('AREA ' + str(int(width)) + ' ' + str(int(height)) + ' ;',
+			file=ofile)
+                print('', file=ofile)
+                for pad in self.NEpad:
+                    print('CORNER ' + pad['name'] + ' SW ' + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.SEpad:
+                    print('CORNER ' + pad['name'] + ' NW ' + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.SWpad:
+                    print('CORNER ' + pad['name'] + ' NE ' + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.NWpad:
+                    print('CORNER ' + pad['name'] + ' SE ' + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.Npads:
+                    flip = 'F ' if 'F' in pad['o'] else ''
+                    print('PAD ' + pad['name'] + ' N ' + flip + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.Epads:
+                    flip = 'F ' if 'F' in pad['o'] else ''
+                    print('PAD ' + pad['name'] + ' E ' + flip + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.Spads:
+                    flip = 'F ' if 'F' in pad['o'] else ''
+                    print('PAD ' + pad['name'] + ' S ' + flip + pad['cell'] + ' ;',
+			    file=ofile)
+                for pad in self.Wpads:
+                    flip = 'F ' if 'F' in pad['o'] else ''
+                    print('PAD ' + pad['name'] + ' W ' + flip + pad['cell'] + ' ;',
+			    file=ofile)
+
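The AREA computation above reduces to: sum the pad widths along each edge, add the adjoining corner cell extents, take the maximum of the two parallel edges, and pad by 2 * margin for the core clearance. A condensed sketch with invented dimensions (the corner orientation handling, which mixes corner widths and heights per corner, is abstracted into plain extents here):

```python
def frame_width(npads, spads, nw, ne, sw, se, margin):
    """Minimum frame width for the given top/bottom pads and corners."""
    # Top edge: N pad widths plus NW/NE corner extents.
    top = sum(p['width'] for p in npads) + nw + ne
    # Bottom edge: S pad widths plus SW/SE corner extents.
    bot = sum(p['width'] for p in spads) + sw + se
    # The frame must fit the wider of the two edges, plus the core
    # margin on both sides.
    return max(top, bot) + 2 * margin

# Hypothetical pads: 80-micron pads, 200-micron corners, 5-micron margin.
npads = [{'width': 80}] * 4
spads = [{'width': 80}] * 3
print(frame_width(npads, spads, 200, 200, 200, 200, 5))  # → 730
```

The same reduction applies to the frame height, using the W/E pad widths (pads on vertical edges are rotated, so their LEF widths stack vertically).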
+    # Run the padring app.
+
+    def runpadring(self):
+        mag_path = self.projectpath + '/mag'
+        if not os.path.exists(mag_path):
+            self.print('No path /mag exists in project space;  cannot run padring.')
+            return
+
+        self.print('Running padring')
+
+        if self.padring_path:
+            padringopts = [self.padring_path]
+        else:
+            padringopts = ['padring']
+
+        # Diagnostic
+        # self.print('Used libraries (self.ioleflibs) = ' + str(self.ioleflibs))
+
+        for iolib in self.ioleflibs:
+            padringopts.append('-L')
+            padringopts.append(iolib)
+        padringopts.append('--def')
+        padringopts.append('padframe.def')
+        padringopts.append('padframe.cfg')
+
+        self.print('Running ' + str(padringopts))
+   
+        p = subprocess.Popen(padringopts, stdout = subprocess.PIPE,
+		    stderr = subprocess.PIPE, cwd = mag_path)
+        self.watch(p)
+
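The command line assembled above has the shape `padring -L <lib1.lef> -L <lib2.lef> ... --def padframe.def padframe.cfg`. A standalone sketch of the same list construction (library file names are invented):

```python
def padring_command(padring_path, ioleflibs):
    """Build the padring argv list: one -L per I/O LEF library,
       DEF output name, then the config file."""
    # Fall back to 'padring' on $PATH when no explicit path is configured.
    opts = [padring_path if padring_path else 'padring']
    for iolib in ioleflibs:
        opts.extend(['-L', iolib])
    opts.extend(['--def', 'padframe.def', 'padframe.cfg'])
    return opts

print(padring_command(None, ['iocells.lef', 'corners.lef']))
# → ['padring', '-L', 'iocells.lef', '-L', 'corners.lef',
#    '--def', 'padframe.def', 'padframe.cfg']
```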
+    # Read placement information from the DEF file generated by padring.
+
+    def readplacement(self, precheck=False):
+        self.print('Reading placement information from DEF file')
+
+        mag_path = self.projectpath + '/mag'
+        if not os.path.isfile(mag_path + '/padframe.def'):
+            if not precheck:
+                self.print('No file padframe.def:  pad frame was not generated.')
+            return False 
+
+        # Very simple DEF file parsing.  The placement DEF only contains a
+        # COMPONENTS section.  Certain assumptions are made about the syntax
+        # that depends on the way 'padring' writes its output.  This is not
+        # a rigorous DEF parser!
+
+        units = 1000
+        in_components = False
+        Npadlist = []
+        Spadlist = []
+        Epadlist = []
+        Wpadlist = []
+        NEpad = []
+        NWpad = []
+        SEpad = []
+        SWpad = []
+        coregroup = []
+
+        # Reset bounds
+        self.llx = self.lly = self.urx = self.ury = 0
+        corners = 0
+
+        with open(mag_path + '/padframe.def', 'r') as ifile:
+            deflines = ifile.read().splitlines()
+            for line in deflines:
+                if 'UNITS DISTANCE MICRONS' in line:
+                    units = line.split()[3]
+                elif in_components:
+                    lparse = line.split()
+                    if lparse[0] == '-':
+                        instname = lparse[1]
+                        cellname = lparse[2]
+                        
+                    elif lparse[0] == '+':
+                        if lparse[1] == 'PLACED':
+                            placex = lparse[3]
+                            placey = lparse[4]
+                            placeo = lparse[6]
+
+                            newpad = {}
+                            newpad['name'] = instname
+                            newpad['cell'] = cellname
+
+                            try:
+                                celldef = next(item for item in self.celldefs if item['name'] == cellname)
+                            except StopIteration:
+                                celldef = None
+                            else:
+                                newpad['iolib'] = celldef['iolib']
+                                newpad['width'] = celldef['width']
+                                newpad['height'] = celldef['height']
+                                newpad['class'] = celldef['class']
+                                newpad['subclass'] = celldef['subclass']
+ 
+                            newpad['x'] = float(placex) / float(units)
+                            newpad['y'] = float(placey) / float(units)
+                            newpad['o'] = placeo
+
+                            # Adjust bounds
+                            if celldef:
+                                if newpad['x'] < self.llx:
+                                    self.llx = newpad['x']
+                                if newpad['y'] < self.lly:
+                                    self.lly = newpad['y']
+
+                                if newpad['o'] == 'N' or newpad['o'] == 'S':
+                                    padurx = newpad['x'] + celldef['width']
+                                    padury = newpad['y'] + celldef['height']
+                                else:
+                                    padurx = newpad['x'] + celldef['height']
+                                    padury = newpad['y'] + celldef['width']
+
+                                if padurx > self.urx:
+                                    self.urx = padurx
+                                if padury > self.ury:
+                                    self.ury = padury
+
+                            # The first four entries in the DEF file are the
+                            # corners.  padring puts the lower left corner at
+                            # zero, so use the zero coordinates to determine
+                            # which pads are which.  Note that padring assumes
+                            # the corner pad is drawn in the SW corner position!
+
+                            if corners < 4:
+                                if newpad['x'] == 0 and newpad['y'] == 0:
+                                    SWpad.append(newpad)
+                                elif newpad['x'] == 0:
+                                    NWpad.append(newpad)
+                                elif newpad['y'] == 0:
+                                    SEpad.append(newpad)
+                                else:
+                                    NEpad.append(newpad)
+                                corners += 1       
+                            else:
+                                # Place according to orientation.  If orientation
+                                # is not standard, be sure to make it standard!
+                                placeo = self.rotate_orientation(placeo)
+                                if placeo == 'N':
+                                    Spadlist.append(newpad)
+                                elif placeo == 'E':
+                                    Wpadlist.append(newpad)
+                                elif placeo == 'S':
+                                    Npadlist.append(newpad)
+                                else:
+                                    Epadlist.append(newpad)
+
+                    elif 'END COMPONENTS' in line:
+                        in_components = False
+                elif 'COMPONENTS' in line:
+                    in_components = True
+
+            self.Npads = Npadlist
+            self.Wpads = Wpadlist
+            self.Spads = Spadlist
+            self.Epads = Epadlist
+
+            self.NWpad = NWpad
+            self.NEpad = NEpad
+            self.SWpad = SWpad
+            self.SEpad = SEpad
+
+        # The padframe has its own DEF file from the padring app, but the core
+        # does not.  The core needs to be floorplanned in a very similar manner.
+        # This is done by searching for a DEF file of the project top-level
+        # layout.  If none exists, one is generated from the layout.  If the
+        # top-level layout does not exist either, then all core cells are
+        # placed at the origin, and the origin is placed at the padframe
+        # inside corner.
+
+        mag_path = self.projectpath + '/mag'
+        if not os.path.isfile(mag_path + '/' + self.project + '.def'):
+            if os.path.isfile(mag_path + '/' + self.project + '.mag'):
+
+                # Create a DEF file from the layout
+                with open(mag_path + '/pfg_write_def.tcl', 'w') as ofile:
+                    print('drc off', file=ofile)
+                    print('box 0 0 0 0', file=ofile)
+                    print('load ' + self.project, file=ofile)
+                    print('def write', file=ofile)
+                    print('quit', file=ofile)
+
+                magicexec = self.magic_path if self.magic_path else 'magic'
+                mproc = subprocess.Popen([magicexec, '-dnull', '-noconsole',
+			'pfg_write_def.tcl'],
+			stdin = subprocess.PIPE, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, cwd = mag_path, universal_newlines = True)
+
+                self.watch(mproc)
+                os.remove(mag_path + '/pfg_write_def.tcl')
+
+            elif not os.path.isfile(mag_path + '/core.def'):
+
+                # With no other information available, copy the corecells
+                # (from the verilog file) into the coregroup list.
+                # Position all core cells starting at the padframe top left
+                # inside corner, and arranging in rows without overlapping.
+                # Note that no attempt is made to organize the cells or
+                # otherwise produce an efficient layout.  Any dimension larger
+                # than the current padframe overruns to the right or bottom.
+
+                if self.SWpad != []:
+                    corellx = self.SWpad[0]['x'] + self.SWpad[0]['width'] + self.margin
+                    corelly = self.SWpad[0]['y'] + self.SWpad[0]['height'] + self.margin
+                else:
+                    corellx = self.Wpads[0]['x'] + self.Wpads[0]['height'] + self.margin
+                    corelly = self.Spads[0]['y'] + self.Spads[0]['height'] + self.margin
+                if self.NEpad != []:
+                    coreurx = self.NEpad[0]['x'] - self.margin
+                    coreury = self.NEpad[0]['y'] - self.margin
+                else:
+                    coreurx = self.Epads[0]['x'] - self.margin
+                    coreury = self.Npads[0]['y'] - self.margin
+                locllx = corellx
+                testllx = corellx
+                loclly = corelly
+                testlly = corelly
+                nextlly = corelly
+              
+                for cell in self.corecells:
+
+                    testllx = locllx + cell['width']
+                    if testllx > coreurx:
+                        locllx = corellx
+                        corelly = nextlly
+                        loclly = nextlly
+
+                    newcore = cell
+                    newcore['x'] = locllx
+                    newcore['y'] = loclly
+                    newcore['o'] = 'N'
+
+                    locllx += cell['width'] + self.margin
+
+                    testlly = corelly + cell['height'] + self.margin
+                    if testlly > nextlly:
+                        nextlly = testlly
+
+                    coregroup.append(newcore)
+
+                self.coregroup = coregroup
+
+        if os.path.isfile(mag_path + '/' + self.project + '.def'):
+            # Read the top-level DEF, and use it to position the core cells.
+            self.print('Reading the top-level cell DEF for core cell placement.')
+
+            units = 1000
+            in_components = False
+            with open(mag_path + '/' + self.project + '.def', 'r') as ifile:
+                deflines = ifile.read().splitlines()
+                for line in deflines:
+                    if 'UNITS DISTANCE MICRONS' in line:
+                        units = line.split()[3]
+                    elif in_components:
+                        lparse = line.split()
+                        if not lparse:
+                            continue
+                        if lparse[0] == '-':
+                            instname = lparse[1]
+                            # NOTE: Magic should not drop the entire path to the
+                            # cell for the cellname;  this needs to be fixed!  To
+                            # work around it, remove any path components.
+                            cellpath = lparse[2]
+                            cellname = os.path.split(cellpath)[1]
+                        
+                        elif lparse[0] == '+':
+                            if lparse[1] == 'PLACED':
+                                placex = lparse[3]
+                                placey = lparse[4]
+                                placeo = lparse[6]
+
+                                newcore = {}
+                                newcore['name'] = instname
+                                newcore['cell'] = cellname
+
+                                try:
+                                    celldef = next(item for item in self.coredefs if item['name'] == cellname)
+                                except StopIteration:
+                                    celldef = None
+                                else:
+                                    newcore['celllib'] = celldef['celllib']
+                                    newcore['width'] = celldef['width']
+                                    newcore['height'] = celldef['height']
+                                    newcore['class'] = celldef['class']
+                                    newcore['subclass'] = celldef['subclass']
+ 
+                                newcore['x'] = float(placex) / float(units)
+                                newcore['y'] = float(placey) / float(units)
+                                newcore['o'] = placeo
+                                coregroup.append(newcore)
+
+                        elif 'END COMPONENTS' in line:
+                            in_components = False
+                    elif 'COMPONENTS' in line:
+                        in_components = True
+
+            self.coregroup = coregroup
+
+        elif os.path.isfile(mag_path + '/core.def'):
+            # No DEF or .mag file, so fallback position is the last core.def
+            # file generated by this script.
+            self.read_core_def(precheck=precheck)
+
+        return True
+
+    # Read placement information from the "padframe.def" file and rotate
+    # all cells according to self.pad_rotation.  This accounts for the
+    # problem that the default orientation of pads is arbitrarily defined
+    # by the foundry, while padring assumes that the corner pad is drawn
+    # in the lower-left position and other pads are drawn with the pad at
+    # the bottom and the buses at the top.
+
+    def rotate_pads_in_def(self):
+        if self.pad_rotation == 0:
+            return
+
+        self.print('Rotating pads in padframe DEF file.')
+        mag_path = self.projectpath + '/mag'
+
+        if not os.path.isfile(mag_path + '/padframe.def'):
+            self.print('No file padframe.def:  Cannot modify pad rotations.')
+            return
+
+        deflines = []
+        with open(mag_path + '/padframe.def', 'r') as ifile:
+            deflines = ifile.read().splitlines()
+
+        outlines = []
+        in_components = False
+        for line in deflines:
+            if in_components:
+                lparse = line.split()
+                if not lparse:
+                    outlines.append(line)
+                    continue
+                if lparse[0] == '+':
+                    if lparse[1] == 'PLACED':
+                        neworient = self.rotate_orientation(lparse[6])
+                        lparse[6] = neworient
+                        line = ' '.join(lparse)
+
+                elif 'END COMPONENTS' in line:
+                    in_components = False
+            elif 'COMPONENTS' in line:
+                in_components = True
+            outlines.append(line)
+
+        with open(mag_path + '/padframe.def', 'w') as ofile:
+            for line in outlines:
+                print(line, file=ofile)
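The per-line rewrite performed above can be sketched in isolation. This standalone fragment assumes a simple one-step (90-degree) orientation map; the `ROT90` table here is hypothetical, since the script's own `rotate_orientation()` derives the mapping from `self.pad_rotation`:

```python
# Hedged sketch: rewrite the orientation token of a DEF "+ PLACED" record.
# ROT90 is an assumed single 90-degree rotation step, including the
# flipped ("F") orientations; the real mapping comes from
# rotate_orientation() elsewhere in this script.
ROT90 = {'N': 'E', 'E': 'S', 'S': 'W', 'W': 'N',
         'FN': 'FE', 'FE': 'FS', 'FS': 'FW', 'FW': 'FN'}

def rotate_placed_line(line, rotmap=ROT90):
    """Rotate the orientation field (7th token) of a '+ PLACED ...' line."""
    lparse = line.split()
    if len(lparse) > 6 and lparse[0] == '+' and lparse[1] == 'PLACED':
        lparse[6] = rotmap.get(lparse[6], lparse[6])
        return ' '.join(lparse)
    return line
```

Note that, as in the routine above, only `+ PLACED` records are touched; every other DEF line passes through unchanged.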
+
+    # Read placement information from the DEF file for the core (created by
+    # a previous run of this script)
+
+    def read_core_def(self, precheck=False):
+        self.print('Reading placement information from core DEF file.')
+
+        mag_path = self.projectpath + '/mag'
+
+        if not os.path.isfile(mag_path + '/core.def'):
+            if not precheck:
+                self.print('No file core.def:  core placement was not generated.')
+            return False 
+
+        # Very simple DEF file parsing, similar to the padframe.def reading
+        # routine above.
+
+        units = 1000
+        in_components = False
+
+        coregroup = []
+
+        with open(mag_path + '/core.def', 'r') as ifile:
+            deflines = ifile.read().splitlines()
+            for line in deflines:
+                if 'UNITS DISTANCE MICRONS' in line:
+                    units = line.split()[3]
+                elif in_components:
+                    lparse = line.split()
+                    if not lparse:
+                        continue
+                    if lparse[0] == '-':
+                        instname = lparse[1]
+                        cellname = lparse[2]
+                        
+                    elif lparse[0] == '+':
+                        if lparse[1] == 'PLACED':
+                            placex = lparse[3]
+                            placey = lparse[4]
+                            placeo = lparse[6]
+
+                            newcore = {}
+                            newcore['name'] = instname
+                            newcore['cell'] = cellname
+
+                            try:
+                                celldef = next(item for item in self.coredefs if item['name'] == cellname)
+                            except StopIteration:
+                                celldef = None
+                            else:
+                                newcore['celllib'] = celldef['celllib']
+                                newcore['width'] = celldef['width']
+                                newcore['height'] = celldef['height']
+                                newcore['class'] = celldef['class']
+                                newcore['subclass'] = celldef['subclass']
+ 
+                            newcore['x'] = float(placex) / float(units)
+                            newcore['y'] = float(placey) / float(units)
+                            newcore['o'] = placeo
+                            coregroup.append(newcore)
+
+                    elif 'END COMPONENTS' in line:
+                        in_components = False
+                elif 'COMPONENTS' in line:
+                    in_components = True
+
+            self.coregroup = coregroup
+
+        return True
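The `-`/`+` token handling shared by the DEF readers in this script can be exercised on a small in-memory fragment. This simplified, standalone sketch omits the `coredefs` lookup and assumes the same COMPONENTS layout parsed above:

```python
def parse_def_components(deflines):
    """Collect instance name, cell name, position (in microns), and
    orientation from the COMPONENTS section of a DEF file, given as a
    list of lines.  Simplified sketch of the readers in this script."""
    units = 1000
    in_components = False
    instname = cellname = None
    comps = []
    for line in deflines:
        if 'UNITS DISTANCE MICRONS' in line:
            units = int(line.split()[3])
        elif in_components:
            lparse = line.split()
            if not lparse:
                continue
            if lparse[0] == '-':
                # "- <instance> <cell>" starts a component record
                instname, cellname = lparse[1], lparse[2]
            elif lparse[0] == '+' and lparse[1] == 'PLACED':
                # "+ PLACED ( x y ) orient" carries the placement
                comps.append({'name': instname, 'cell': cellname,
                              'x': float(lparse[3]) / units,
                              'y': float(lparse[4]) / units,
                              'o': lparse[6]})
            elif 'END COMPONENTS' in line:
                in_components = False
        elif 'COMPONENTS' in line:
            in_components = True
    return comps
```

Because `END COMPONENTS` is checked inside the `in_components` branch first, the bare `COMPONENTS` keyword test at the bottom never re-triggers on the section terminator.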
+
+    # Save the layout to a Magic database file (to be completed)
+
+    def save(self):
+        self.print('Saving results in a magic layout database.')
+
+        # Generate a list of (unique) LEF libraries for all padframe and core cells
+        leflist = []
+        for pad in self.celldefs:
+            if pad['iolib'] not in leflist:
+                leflist.append(pad['iolib'])
+
+        for core in self.coredefs:
+            if core['celllib'] not in leflist:
+                leflist.append(core['celllib'])
+
+        # Run magic, and generate the padframe with a series of commands
+        mag_path = self.projectpath + '/mag'
+
+        with open(mag_path + '/pfg_write_mag.tcl', 'w') as ofile:
+            print('drc off', file=ofile)
+            print('box 0 0 0 0', file=ofile)
+            for leffile in leflist:
+                print('lef read ' + leffile, file=ofile)
+            print('def read padframe', file=ofile)
+            print('select top cell', file=ofile)
+            print('select area', file=ofile)
+            print('select save padframe', file=ofile)
+            print('delete', file=ofile)
+            print('def read core', file=ofile)
+            print('getcell padframe', file=ofile)
+            print('save ' + self.project, file=ofile)
+            print('writeall force ' + self.project, file=ofile)
+            print('quit', file=ofile)
+
+        magicexec = self.magic_path if self.magic_path else 'magic'
+        mproc = subprocess.Popen([magicexec, '-dnull', '-noconsole',
+			'pfg_write_mag.tcl'],
+			stdin = subprocess.PIPE, stdout = subprocess.PIPE,
+			stderr = subprocess.PIPE, cwd = mag_path, universal_newlines = True)
+        self.watch(mproc)
+        os.remove(mag_path + '/pfg_write_mag.tcl')
+        self.print('Done writing layout ' + self.project + '.mag')
+
+        # Write the core DEF file if it does not exist yet.
+        if not os.path.isfile(mag_path + '/core.def'):
+            self.write_core_def()
+
+if __name__ == '__main__':
+    faulthandler.register(signal.SIGUSR2)
+    options = []
+    arguments = []
+    for item in sys.argv[1:]:
+        if item.find('-', 0) == 0:
+            options.append(item)
+        else:
+            arguments.append(item)
+
+    if '-help' in options:
+        print(sys.argv[0] + ' [options]')
+        print('')
+        print('options:')
+        print('   -noc    Print output to terminal, not the gui window')
+        print('   -nog    No graphics, run in batch mode')
+        print('   -cfg    Use existing padframe.cfg, do not regenerate')
+        print('   -ef_format   Use efabless project file structure')
+        print('   -padring-path=<path>	path to padring executable')
+        print('   -magic-path=<path>	path to magic executable')
+        print('   -tech-path=<path>	path to tech root folder')
+        print('   -project-path=<path>	path to project root folder')
+        print('   -help   Print this usage information')
+        print('')
+        sys.exit(0)
+
+    root = tkinter.Tk()
+    do_gui = not ('-nog' in options or '-nogui' in options)
+    app = SoCFloorplanner(root, do_gui)
+
+    # Allow option -noc to bypass the text-to-console redirection, so crash
+    # information doesn't disappear with the app.
+
+    app.use_console = not ('-noc' in options or '-noconsole' in options)
+    if not do_gui:
+        app.use_console = False
+
+    # efabless format can be specified on the command line, but note that it
+    # is otherwise auto-detected by checking for .config vs. .ef-config in
+    # the project space.
+
+    app.ef_format = '-ef_format' in options
+    app.keep_cfg = '-cfg' in options
+
+    app.padring_path = None
+    app.magic_path = None
+    app.techpath = None
+    app.projectpath = None
+
+    for option in options:
+        if option.split('=')[0] == '-padring-path':
+            app.padring_path = option.split('=')[1]
+        elif option.split('=')[0] == '-magic-path':
+            app.magic_path = option.split('=')[1]
+        elif option.split('=')[0] == '-tech-path':
+            app.techpath = option.split('=')[1]
+        elif option.split('=')[0] == '-project-path':
+            app.projectpath = option.split('=')[1]
+            app.projectpath = app.projectpath.rstrip('/')
+
+    app.text_to_console()
+    app.init_padframe()
+    if app.do_gui:
+        root.mainloop()
+    else:
+        # Run 'save' in non-GUI mode
+        app.save()
+        sys.exit(0)
+
diff --git a/common/split_gds.py b/common/split_gds.py
new file mode 100755
index 0000000..a673c2f
--- /dev/null
+++ b/common/split_gds.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python3
+# Script to read a GDS library and write into individual GDS files, one per cell
+
+import os
+import sys
+import subprocess
+
+def usage():
+    print('split_gds.py <path_to_gds_library> <magic_techfile> <file_with_list_of_cells>')
+
+if __name__ == '__main__':
+
+    if len(sys.argv) == 1:
+        print("No arguments given to split_gds.py.")
+        usage()
+        sys.exit(1)
+
+    optionlist = []
+    arguments = []
+
+    for option in sys.argv[1:]:
+        if option.find('-', 0) == 0:
+            optionlist.append(option)
+        else:
+            arguments.append(option)
+
+    if len(arguments) != 3:
+        print("Wrong number of arguments given to split_gds.py.")
+        usage()
+        sys.exit(1)
+
+    source = arguments[0]
+
+    techfile = arguments[1]
+
+    celllist = arguments[2]
+    if os.path.isfile(celllist):
+        with open(celllist, 'r') as ifile:
+            celllist = ifile.read().splitlines()
+    else:
+        print('Cell list file ' + celllist + ' not found.')
+        sys.exit(1)
+
+    destdir = os.path.split(source)[0]
+    gdsfile = os.path.split(source)[1]
+
+    with open(destdir + '/split_gds.tcl', 'w') as ofile:
+        print('#!/bin/env wish', file=ofile)
+        print('drc off', file=ofile)
+        print('gds readonly true', file=ofile)
+        print('gds rescale false', file=ofile)
+        print('tech unlock *', file=ofile)
+        print('gds read ' + gdsfile, file=ofile)
+
+        for cell in celllist:
+            print('load ' + cell, file=ofile)
+            print('gds write ' + cell, file=ofile)
+
+        print('quit -noprompt', file=ofile)
+
+    mproc = subprocess.run(['magic', '-dnull', '-noconsole',
+		'-T', techfile,
+		destdir + '/split_gds.tcl'],
+		stdin = subprocess.DEVNULL,
+		stdout = subprocess.PIPE,
+		stderr = subprocess.PIPE, cwd = destdir,
+		universal_newlines = True)
+    if mproc.stdout:
+        for line in mproc.stdout.splitlines():
+            print(line)
+    if mproc.stderr:
+        print('Error message output from magic:')
+        for line in mproc.stderr.splitlines():
+            print(line)
+    if mproc.returncode != 0:
+        print('ERROR:  Magic exited with status ' + str(mproc.returncode))
+
+    os.remove(destdir + '/split_gds.tcl')
+    sys.exit(0)
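The Tcl driver handed to magic above can be factored into a plain string builder. This is a minimal sketch mirroring the commands split_gds.py writes; the file and cell names in the usage are placeholders:

```python
def make_split_script(gdsfile, cells):
    """Build the magic Tcl script that splits one GDS library into one
    GDS file per cell (mirrors what split_gds.py writes to disk)."""
    lines = ['#!/bin/env wish',
             'drc off',                 # no DRC while splitting
             'gds readonly true',       # keep GDS data untouched
             'gds rescale false',
             'tech unlock *',
             'gds read ' + gdsfile]
    for cell in cells:
        lines.append('load ' + cell)
        lines.append('gds write ' + cell)
    lines.append('quit -noprompt')
    return '\n'.join(lines)
```

For example, `make_split_script('mylib.gds', ['cellA', 'cellB'])` yields one `load`/`gds write` pair per cell between the fixed header and the final `quit`.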
diff --git a/common/staging_install.py b/common/staging_install.py
new file mode 100755
index 0000000..3e13cb0
--- /dev/null
+++ b/common/staging_install.py
@@ -0,0 +1,617 @@
+#!/usr/bin/env python3
+#
+# staging_install.py
+#
+# This file copies the staging area created by foundry_install.py
+# into the target directory area, changing paths to match the target,
+# and creating symbolic links where requested and allowed.
+#
+# Options:
+#    -link_from <type>	Make symbolic links to vendor files from target
+#			Types are: "none", "source", or a PDK name.
+#			Default "none" (copy all files from source)
+#    -ef_format		Use efabless naming (libs.ref/techLEF),
+#			otherwise use generic naming (libs.tech/lef)
+#
+#    -staging <path>	Path to staging top level directory
+#    -target <path>	Path to target top level directory
+#    -local <path>	For distributed installs, this is the local
+#			path to target top level directory.
+#    -source <path>     Path to original source top level directory,
+#                       if link_from is "source".  This option may
+#                       be called multiple times if there are multiple
+#                       sources.
+
+import re
+import os
+import sys
+import glob
+import stat
+import shutil
+import filecmp
+import subprocess
+
+# NOTE:  copy_tree from distutils is used here because shutil.copytree()
+# cannot copy into an existing directory until Python 3.8, which added
+# the "dirs_exist_ok=True" option.
+from distutils.dir_util import copy_tree
+
+def usage():
+    print("staging_install.py [options...]")
+    print("   -link_from <name> Make symbolic links from target to <name>")
+    print("                     where <name> can be 'source' or a PDK name.")
+    print("                     Default behavior is to copy all files.")
+    print("   -copy             Copy files from source to target (default)")
+    print("   -ef_format        Use efabless naming conventions for local directories")
+    print("")
+    print("   -staging <path>   Path to top of staging directory tree")
+    print("   -target <path>    Path to top of target directory tree")
+    print("   -local <path>     Local path to top of target directory tree for distributed install")
+    print("")
+    print(" If <target> is unspecified then <name> is used for the target.")
+
+def makeuserwritable(filepath):
+    if os.path.exists(filepath):
+        st = os.stat(filepath)
+        os.chmod(filepath, st.st_mode | stat.S_IWUSR)
+
+# Filter files to replace all strings matching "stagingdir" with "localdir" for
+# every file in "tooldir".  If "tooldir" contains subdirectories, then recursively
+# apply the replacement filter to all files in the subdirectories.  Do not follow
+# symbolic links.
+
+def filter_recursive(tooldir, stagingdir, localdir):
+    gdstypes = ['.gds', '.gds2', '.gdsii']
+
+    if not os.path.exists(tooldir):
+        return 0
+    elif os.path.islink(tooldir):
+        return 0
+
+    toolfiles = os.listdir(tooldir)
+    total = 0
+
+    for file in toolfiles:
+        # Do not attempt to do text substitutions on a binary file!
+        if os.path.splitext(file)[1] in gdstypes:
+            continue
+
+        filepath = tooldir + '/' + file
+        if os.path.islink(filepath):
+            continue
+        elif os.path.isdir(filepath):
+            total += filter_recursive(filepath, stagingdir, localdir)
+        else:
+            with open(filepath, 'r') as ifile:
+                try:
+                    flines = ifile.read().splitlines()
+                except UnicodeDecodeError:
+                    print('Failure to read file ' + filepath + '; non-ASCII content.')
+                    continue
+
+            # Make sure this file is writable (as the original may not be)
+            makeuserwritable(filepath)
+
+            modified = False
+            with open(filepath, 'w') as ofile:
+                for line in flines:
+                    newline = line.replace(stagingdir, localdir)
+                    print(newline, file=ofile) 
+                    if newline != line:
+                        modified = True
+
+            if modified:
+                total += 1
+    return total
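The core of filter_recursive() is a line-by-line string substitution. This pure-function sketch shows the same replace-and-flag logic without touching the filesystem; the paths in the usage are hypothetical:

```python
def filter_lines(flines, stagingdir, localdir):
    """Replace every occurrence of stagingdir with localdir in a list
    of text lines.  Returns (newlines, modified), where modified is
    True if any line changed -- the same per-file bookkeeping that
    filter_recursive() uses to count substitutions."""
    newlines = []
    modified = False
    for line in flines:
        newline = line.replace(stagingdir, localdir)
        if newline != line:
            modified = True
        newlines.append(newline)
    return newlines, modified
```

Separating the substitution from the directory walk makes the replacement rule easy to test on a single line before running it over a whole PDK tree.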
+        
+# To avoid problems with various library functions that copy hierarchical
+# directory trees, remove all the files from the target that are going to
+# be replaced by the contents of staging.  This avoids problems with
+# symbolic links and such.
+
+def remove_target(stagingdir, targetdir):
+
+    slist = os.listdir(stagingdir)
+    tlist = os.listdir(targetdir)
+
+    for sfile in slist:
+        if sfile in tlist:
+            tpath = targetdir + '/' + sfile
+            if os.path.islink(tpath):
+                os.unlink(tpath)
+            elif os.path.isdir(tpath):
+                remove_target(stagingdir + '/' + sfile, targetdir + '/' + sfile)
+            else:
+                os.remove(tpath)
+
+# Create a list of source files/directories from the contents of source.txt
+
+def make_source_list(sources):
+    sourcelist = []
+    for source in sources:
+        sourcelist.extend(glob.glob(source))
+    return sourcelist
+
+# Replace all files in list "libfiles" with symbolic links to files in
+# "sourcelist", where the files are found to be the same.  If the entry
+# in "libfiles" is a directory and the same directory is found in "sourcelist",
+# then repeat recursively on the subdirectory.
+#
+# Because the installation may be distributed, there may be a difference
+# between where the files to be linked to currently are (checklist)
+# and where they will eventually be located (sourcelist).
+
+def replace_with_symlinks(libfiles, sourcelist):
+    # List of files that never get installed
+    exclude = ['generate_magic.tcl', '.magicrc', 'sources.txt']
+    total = 0
+    for libfile in libfiles:
+        if os.path.islink(libfile):
+            continue
+        else:
+            try:
+                sourcefile = next(item for item in sourcelist if os.path.split(item)[1] == os.path.split(libfile)[1])
+            except StopIteration:
+                pass
+            else:
+                if os.path.isdir(libfile):
+                    newlibfiles = glob.glob(libfile + '/*')
+                    newsourcelist = glob.glob(sourcefile + '/*')
+                    total += replace_with_symlinks(newlibfiles, newsourcelist)
+                elif filecmp.cmp(libfile, sourcefile):
+                    if not os.path.split(libfile)[1] in exclude:
+                        os.remove(libfile)
+                        # Use absolute path for the source file
+                        sourcepath = os.path.abspath(sourcefile)
+                        os.symlink(sourcepath, libfile)
+                        total += 1
+    return total
+
+# Similar to the routine above, replace files in "libdir" with symbolic
+# links to the files in "srclibdir", where the files are found to be the
+# same.  The difference from the routine above is that "srclibdir" is
+# another installed PDK, and so the directory hierarchy is expected to
+# match that of "libdir" exactly, so the process of finding matches is
+# a bit more straightforward.
+#
+# Because the installation may be distributed, there may be a difference
+# between where the files to be linked to currently are (checklibdir)
+# and where they will eventually be located (srclibdir).
+
+def replace_all_with_symlinks(libdir, srclibdir, checklibdir):
+    total = 0
+    try:
+        libfiles = os.listdir(libdir)
+    except FileNotFoundError:
+        print('Cannot list directory ' + libdir)
+        print('Called: replace_all_with_symlinks(' + libdir + ', ' + srclibdir + ', ' + checklibdir + ')')
+        return total
+
+    try:
+        checkfiles = os.listdir(checklibdir)
+    except FileNotFoundError:
+        print('Cannot list check directory ' + checklibdir)
+        print('Called: replace_all_with_symlinks(' + libdir + ', ' + srclibdir + ', ' + checklibdir + ')')
+        return total
+
+    for libfile in libfiles:
+        if libfile in checkfiles:
+            libpath = libdir + '/' + libfile
+            checkpath = checklibdir + '/' + libfile
+            srcpath = srclibdir + '/' + libfile
+
+            if os.path.isdir(libpath):
+                if os.path.isdir(checkpath):
+                    total += replace_all_with_symlinks(libpath, srcpath, checkpath)
+            else:
+                try:
+                    if filecmp.cmp(libpath, checkpath):
+                        os.remove(libpath)
+                        os.symlink(srcpath, libpath)
+                        total += 1
+                except FileNotFoundError:
+                    print('Failed file compare with libpath=' + libpath + ', checkpath=' + checkpath)
+
+    return total
+
+#----------------------------------------------------------------
+# This is the main entry point for the staging install script.
+#----------------------------------------------------------------
+
+if __name__ == '__main__':
+
+    if len(sys.argv) == 1:
+        print("No options given to staging_install.py.")
+        usage()
+        sys.exit(1)
+    
+    optionlist = []
+    newopt = []
+
+    stagingdir = None
+    targetdir = None
+    link_from = None
+    localdir = None
+
+    ef_format = False
+    do_install = True
+
+    # Break arguments into groups where the first word begins with "-".
+    # All following words not beginning with "-" are appended to the
+    # same list (optionlist).  Then each optionlist is processed.
+    # Note that the first entry in optionlist has the '-' removed.
+
+    for option in sys.argv[1:]:
+        if option.find('-', 0) == 0:
+            if newopt != []:
+                optionlist.append(newopt)
+                newopt = []
+            newopt.append(option[1:])
+        else:
+            newopt.append(option)
+
+    if newopt != []:
+        optionlist.append(newopt)
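The grouping rule described in the comment above can be captured in a small helper. This standalone sketch reproduces the same behavior (the first word of each group loses its leading '-', and following non-option words join the group):

```python
def group_options(argv):
    """Group command-line words into option lists: each group starts
    at a word beginning with '-' (stored with the '-' stripped) and
    absorbs the non-option words that follow it."""
    optionlist = []
    newopt = []
    for option in argv:
        if option.startswith('-'):
            if newopt:
                optionlist.append(newopt)  # close the previous group
                newopt = []
            newopt.append(option[1:])
        else:
            newopt.append(option)
    if newopt:
        optionlist.append(newopt)          # flush the final group
    return optionlist
```

So `-staging /path -ef_format` becomes `[['staging', '/path'], ['ef_format']]`, matching how the loop below pops recognized groups out of optionlist.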
+
+    # Check for option "ef_format" or "std_format"
+    for option in optionlist[:]:
+        if option[0] == 'ef_naming' or option[0] == 'ef_names' or option[0] == 'ef_format':
+            optionlist.remove(option)
+            ef_format = True
+        elif option[0] == 'std_naming' or option[0] == 'std_names' or option[0] == 'std_format':
+            optionlist.remove(option)
+            ef_format = False
+        elif option[0] == 'uninstall':
+            optionlist.remove(option)
+            do_install = False
+
+    # Check for options "link_from", "staging", "target", and "local"
+
+    link_name = None
+    for option in optionlist[:]:
+        if option[0] == 'link_from':
+            optionlist.remove(option)
+            if option[1].lower() == 'none':
+                link_from = None
+            elif option[1].lower() == 'source':
+                link_from = 'source'
+            else:
+                link_from = option[1]
+                link_name = os.path.split(link_from)[1]
+        elif option[0] == 'staging' or option[0] == 'source':
+            optionlist.remove(option)
+            stagingdir = option[1]
+        elif option[0] == 'target':
+            optionlist.remove(option)
+            targetdir = option[1]
+        elif option[0] == 'local':
+            optionlist.remove(option)
+            localdir = option[1]
+
+    # Error if no staging or dest specified
+    if not stagingdir:
+        print("No staging directory specified.  Exiting.")
+        sys.exit(1)
+
+    if not targetdir:
+        print("No target directory specified.  Exiting.")
+        sys.exit(1)
+
+    # If localdir is not specified, then it is the same as the parent
+    # of the target (local installation assumed)
+    if not localdir:
+        localdir = targetdir
+
+    # Take the target PDK name from the target path last component
+    pdkname = os.path.split(targetdir)[1]
+
+    # If link source is a PDK name, if it has no path, then pull the
+    # path from the target name.
+
+    if link_from:
+        if link_from != 'source':
+            if link_from.find('/', 0) < 0:
+                link_name = link_from
+                link_from = os.path.split(localdir)[0] + '/' + link_name
+        else:
+            # If linking from source, convert the source path to an
+            # absolute pathname.
+            stagingdir = os.path.abspath(stagingdir)
+
+    # If link_from is the same as localdir, then set link_from to None
+    if link_from == localdir:
+        link_from = None
+
+    # checkdir is the DIST target directory for the PDK pointed
+    # to by link_name.  Files must be found there before creating
+    # symbolic links to the (not yet existing) final install location.
+
+    if link_name:
+        checkdir = os.path.split(targetdir)[0] + '/' + link_name
+    else:
+        checkdir = ''
+
+    # Diagnostic
+    if do_install:
+        print("Installing in target directory " + targetdir)
+    else:
+        print("Uninstalling from target directory " + targetdir)
+        print("(Method not yet implemented)")
+
+    # Create the top-level directories
+
+    os.makedirs(targetdir, exist_ok=True)
+    os.makedirs(targetdir + '/libs.tech', exist_ok=True)
+    os.makedirs(targetdir + '/libs.ref', exist_ok=True)
+    if os.path.isdir(stagingdir + '/libs.priv'):
+        os.makedirs(targetdir + '/libs.priv', exist_ok=True)
+        has_priv = True
+    else:
+        has_priv = False
+
+    # Path to magic techfile depends on ef_format
+
+    if ef_format:
+        mag_current = '/libs.tech/magic/current/'
+    else:
+        mag_current = '/libs.tech/magic/'
+
+    # First install everything by direct copy.  Keep the staging files
+    # as they will be used to reference the target area to know which
+    # files need to be checked and/or modified.
+
+    if not os.path.isdir(targetdir):
+        try:
+            os.makedirs(targetdir, exist_ok=True)
+        except OSError:
+            print('Fatal error:  Cannot make target directory ' + targetdir + '!')
+            sys.exit(1)
+
+    # Remove any files from the target directory that are going to be replaced
+    print('Removing files from target')
+    remove_target(stagingdir, targetdir)
+
+    print('Copying staging files to target')
+    # print('Diagnostic:  copy_tree ' + stagingdir + ' ' + targetdir)
+    copy_tree(stagingdir, targetdir, preserve_symlinks=True)
+    print('Done.')
+
+    # Magic and qflow setup files have references to the staging area that have
+    # been used by the vendor install;  these need to be changed to the target
+    # directory.
+
+    print('Changing local path references from ' + stagingdir + ' to ' + localdir)
+    print('Part 1:  Tools')
+
+    needcheck = ['ngspice']
+    techdirs = ['/libs.tech/']
+    if has_priv:
+        techdirs.append('/libs.priv/')
+
+    for techdir in techdirs:
+        tools = os.listdir(targetdir + techdir)
+        for tool in tools:
+            tooldir = targetdir + techdir + tool
+
+            # There are few enough tool setup files that they can all be
+            # filtered directly.  This code only looks in the directory
+            # 'tooldir'.  If there are files in subdirectories of 'tooldir'
+            # that require substitution, then this code needs to be revisited.
+
+            # Note that due to the low overhead of tool setup files, there is
+            # no attempt to check for possible symlinks to link_from if link_from
+            # is a base PDK.
+
+            total = filter_recursive(tooldir, stagingdir, localdir)
+            if total > 0:
+                substr = 'substitutions' if total > 1 else 'substitution'
+                print('      ' + tool + ' (' + str(total) + ' ' + substr + ')')
+
+    # If "link_from" is another PDK, then check all files against the files in
+    # the other PDK, and replace each file with a symbolic link if the file
+    # contents match.  (Note:  This is done only for ngspice model files;
+    # other tool files are generally small, so symbolic links are deemed
+    # unnecessary.)
+
+    if link_from != 'source':
+        thispdk = os.path.split(targetdir)[1]
+
+        # Only create links for PDKs other than the one we are making links to.
+        if thispdk != link_from:
+            print('Replacing files with symbolic links to ' + link_from + ' where possible.')
+            for techdir in techdirs:
+                for tool in needcheck:
+                    tooldir = targetdir + techdir + tool
+                    srctooldir = link_from + techdir + tool
+                    if checkdir != '':
+                        checktooldir = checkdir + techdir + tool
+                    else:
+                        checktooldir = srctooldir
+                    if os.path.exists(tooldir):
+                        total = replace_all_with_symlinks(tooldir, srctooldir, checktooldir)
+                        if total > 0:
+                            symstr = 'symlinks' if total > 1 else 'symlink'
+                            print('      ' + tool + ' (' + str(total) + ' ' + symstr + ')')
+
+    # In .mag files in mag/ and maglef/, the staging directory name also
+    # needs to be changed to localdir.
+
+    needcheck = ['mag', 'maglef']
+    refdirs = ['/libs.ref/']
+    if has_priv:
+        refdirs.append('/libs.priv/')
+
+    if ef_format:
+        print('Part 2:  Formats')
+        for refdir in refdirs:
+            for filetype in needcheck:
+                print('   ' + filetype)
+                filedir = targetdir + refdir + filetype
+                if os.path.isdir(filedir):
+                    libraries = os.listdir(filedir)
+                    for library in libraries:
+                        libdir = filedir + '/' + library
+                        total = filter_recursive(libdir, stagingdir, localdir)
+                        if total > 0:
+                            substr = 'substitutions' if total > 1 else 'substitution'
+                            print('      ' + library + ' (' + str(total) + ' ' + substr + ')')
+    else:
+        print('Part 2:  Libraries')
+        for refdir in refdirs:
+            libraries = os.listdir(targetdir + refdir)
+            for library in libraries:
+                print('   ' + library)
+                for filetype in needcheck:
+                    filedir = targetdir + refdir + library + '/' + filetype
+                    total = filter_recursive(filedir, stagingdir, localdir)
+                    if total > 0:
+                        substr = 'substitutions' if total > 1 else 'substitution'
+                        print('      ' + filetype + ' (' + str(total) + ' ' + substr + ')')
+
+    # If "link_from" is "source", then check all files against the source
+    # directory, and replace the file with a symbolic link if the file
+    # contents match.  The "foundry_install.py" script should have added a
+    # file "sources.txt" with the name of the source directories for each
+    # install directory.
+
+    if link_from == 'source':
+        print('Replacing files with symbolic links to source where possible.')
+        for refdir in refdirs:
+            if ef_format:
+                filedirs = os.listdir(targetdir + refdir)
+                for filedir in filedirs:
+                    print('   ' + filedir)
+                    dirpath = targetdir + refdir + filedir
+                    if os.path.isdir(dirpath):
+                        libraries = os.listdir(dirpath)
+                        for library in libraries:
+                            libdir = targetdir + refdir + filedir + '/' + library
+                            libfiles = os.listdir(libdir)
+                            if 'sources.txt' in libfiles:
+                                libfiles = glob.glob(libdir + '/*')
+                                libfiles.remove(libdir + '/sources.txt')
+                                with open(libdir + '/sources.txt') as ifile:
+                                    sources = ifile.read().splitlines()
+                                sourcelist = make_source_list(sources)
+                                total = replace_with_symlinks(libfiles, sourcelist)
+                                if total > 0:
+                                    symstr = 'symlinks' if total > 1 else 'symlink'
+                                    print('      ' + library + ' (' + str(total) + ' ' + symstr + ')')
+            else:
+                libraries = os.listdir(targetdir + refdir)
+                for library in libraries:
+                    print('   ' + library)
+                    filedirs = os.listdir(targetdir + refdir + library)
+                    for filedir in filedirs:
+                        libdir = targetdir + refdir + library + '/' + filedir
+                        if os.path.isdir(libdir):
+                            libfiles = os.listdir(libdir)
+                            if 'sources.txt' in libfiles:
+                                # List again, but with full paths.
+                                libfiles = glob.glob(libdir + '/*')
+                                libfiles.remove(libdir + '/sources.txt')
+                                with open(libdir + '/sources.txt') as ifile:
+                                    sources = ifile.read().splitlines()
+                                sourcelist = make_source_list(sources)
+                                total = replace_with_symlinks(libfiles, sourcelist)
+                                if total > 0:
+                                    symstr = 'symlinks' if total > 1 else 'symlink'
+                                    print('      ' + filedir + ' (' + str(total) + ' ' + symstr + ')')
+
+    # Otherwise, if "link_from" is another PDK, then check all files against
+    # the files in the other PDK, and replace the file with a symbolic link
+    # if the file contents match.
+
+    elif link_from:
+        thispdk = os.path.split(targetdir)[1]
+
+        # Only create links for PDKs other than the one we are making links to.
+        if thispdk != link_from:
+
+            print('Replacing files with symbolic links to ' + link_from + ' where possible.')
+
+            for refdir in refdirs:
+                if ef_format:
+                    filedirs = os.listdir(targetdir + refdir)
+                    for filedir in filedirs:
+                        print('   ' + filedir)
+                        dirpath = targetdir + refdir + filedir
+                        if os.path.isdir(dirpath):
+                            libraries = os.listdir(dirpath)
+                            for library in libraries:
+                                libdir = targetdir + refdir + filedir + '/' + library
+                                srclibdir = link_from + refdir + filedir + '/' + library
+                                if checkdir != '':
+                                    checklibdir = checkdir + refdir + filedir + '/' + library
+                                else:
+                                    checklibdir = srclibdir
+                                if os.path.exists(libdir):
+                                    total = replace_all_with_symlinks(libdir, srclibdir, checklibdir)
+                                    if total > 0:
+                                        symstr = 'symlinks' if total > 1 else 'symlink'
+                                        print('      ' + library + ' (' + str(total) + ' ' + symstr + ')')
+                else:
+                    libraries = os.listdir(targetdir + refdir)
+                    for library in libraries:
+                        print('   ' + library)
+                        filedirs = os.listdir(targetdir + refdir + library)
+                        for filedir in filedirs:
+                            libdir = targetdir + refdir + library + '/' + filedir
+                            srclibdir = link_from + refdir + library + '/' + filedir
+                            if checkdir != '':
+                                checklibdir = checkdir + refdir + library + '/' + filedir
+                            else:
+                                checklibdir = srclibdir
+                            if os.path.exists(libdir):
+                                total = replace_all_with_symlinks(libdir, srclibdir, checklibdir)
+                                if total > 0:
+                                    symstr = 'symlinks' if total > 1 else 'symlink'
+                                    print('      ' + filedir + ' (' + str(total) + ' ' + symstr + ')')
+
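The replace-with-symlink-if-contents-match step above is done by `replace_all_with_symlinks` in this script. A minimal sketch of the core comparison using the standard-library `filecmp` module; `symlink_if_identical` is a hypothetical helper name for illustration, and the real routine additionally handles directories and a separate check path:

```python
import filecmp
import os

def symlink_if_identical(target, source):
    # Replace 'target' with a symbolic link to 'source' when both are
    # regular files with identical contents.  shallow=False forces a
    # byte-for-byte comparison rather than a stat-only check.
    # Returns the number of links created (0 or 1).
    if os.path.isfile(target) and os.path.isfile(source):
        if filecmp.cmp(target, source, shallow=False):
            os.remove(target)
            os.symlink(source, target)
            return 1
    return 0
```

Applied per-file over a library directory, summing the return values yields the symlink count that the script reports.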
+    # Remove temporary files:  Magic generation scripts, sources.txt
+    # file, and magic extract files.
+
+    print('Removing temporary files from destination.')
+
+    for refdir in refdirs:
+        if ef_format:
+            filedirs = os.listdir(targetdir + refdir)
+            for filedir in filedirs:
+                dirpath = targetdir + refdir + filedir
+                if os.path.islink(dirpath):
+                    continue
+                elif os.path.isdir(dirpath):
+                    libraries = os.listdir(targetdir + refdir + filedir)
+                    for library in libraries:
+                        libdir = targetdir + refdir + filedir + '/' + library
+                        libfiles = os.listdir(libdir)
+                        for libfile in libfiles:
+                            filepath = libdir + '/' + libfile
+                            if os.path.islink(filepath):
+                                continue
+                            elif libfile == 'sources.txt':
+                                os.remove(filepath)
+                            elif libfile == 'generate_magic.tcl':
+                                os.remove(filepath)
+                            elif os.path.splitext(libfile)[1] == '.ext':
+                                os.remove(filepath)
+        else:
+            libraries = os.listdir(targetdir + refdir)
+            for library in libraries:
+                filedirs = os.listdir(targetdir + refdir + library)
+                for filedir in filedirs:
+                    filepath = targetdir + refdir + library + '/' + filedir
+                    if os.path.islink(filepath):
+                        continue
+                    elif os.path.isdir(filepath):
+                        libfiles = os.listdir(filepath)
+                        for libfile in libfiles:
+                            libfilepath = filepath + '/' + libfile
+                            if os.path.islink(libfilepath):
+                                continue
+                            elif libfile == 'sources.txt':
+                                os.remove(libfilepath)
+                            elif libfile == 'generate_magic.tcl':
+                                os.remove(libfilepath)
+                            elif os.path.splitext(libfile)[1] == '.ext':
+                                os.remove(libfilepath)
+
+    print('Done with PDK migration.')
+    sys.exit(0)
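The `copy_tree` call used above comes from `distutils.dir_util`, which is deprecated since Python 3.10 and removed in 3.12. A minimal sketch of an equivalent copy step using `shutil` (the name `copy_staging` is illustrative, not part of the script):

```python
import shutil

def copy_staging(stagingdir, targetdir):
    # shutil.copytree with symlinks=True and dirs_exist_ok=True (Python
    # 3.8+) mirrors distutils.dir_util.copy_tree(preserve_symlinks=True):
    # symlinks are reproduced as symlinks, and an existing target
    # directory is merged into rather than treated as an error.
    shutil.copytree(stagingdir, targetdir, symlinks=True,
                    dirs_exist_ok=True)
```

On older interpreters without `dirs_exist_ok`, the staging tree would have to be copied into a not-yet-existing target, or walked manually.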
diff --git a/common/tksimpledialog.py b/common/tksimpledialog.py
new file mode 100755
index 0000000..4734291
--- /dev/null
+++ b/common/tksimpledialog.py
@@ -0,0 +1,86 @@
+#!/usr/bin/env python3
+#
+# Dialog class for tkinter
+
+import tkinter
+from tkinter import ttk
+
+class Dialog(tkinter.Toplevel):
+
+    def __init__(self, parent, message = None, title = None, seed = None, border = 'blue', **kwargs):
+
+        tkinter.Toplevel.__init__(self, parent)
+        self.transient(parent)
+
+        if title:
+            self.title(title)
+
+        self.configure(background=border, padx=2, pady=2)
+        self.obox = ttk.Frame(self)
+        self.obox.pack(side = 'left', fill = 'both', expand = 'true')
+
+        self.parent = parent
+        self.result = None
+        body = ttk.Frame(self.obox)
+        self.initial_focus = self.body(body, message, seed, **kwargs)
+        body.pack(padx = 5, pady = 5)
+        self.buttonbox()
+        self.grab_set()
+
+        if not self.initial_focus:
+            self.initial_focus = self
+
+        self.protocol("WM_DELETE_WINDOW", self.cancel)
+        self.geometry("+%d+%d" % (parent.winfo_rootx() + 50,
+                                  parent.winfo_rooty() + 50))
+
+        self.initial_focus.focus_set()
+        self.wait_window(self)
+
+    # Construction hooks
+
+    def body(self, master, message, seed, **kwargs):
+        # Create the dialog body.  Return the widget that should have
+        # initial focus.  This method should be overridden.
+        pass
+
+    def buttonbox(self):
+        # Add standard button box.  Override if you don't want the
+        # standard buttons
+
+        box = ttk.Frame(self.obox)
+
+        self.okb = ttk.Button(box, text="OK", width=10, command=self.ok, default='active')
+        self.okb.pack(side='left', padx=5, pady=5)
+        w = ttk.Button(box, text="Cancel", width=10, command=self.cancel)
+        w.pack(side='left', padx=5, pady=5)
+
+        self.bind("<Return>", self.ok)
+        self.bind("<Escape>", self.cancel)
+        box.pack(fill='x', expand='true')
+
+    # Standard button semantics
+
+    def ok(self, event=None):
+
+        if not self.validate():
+            self.initial_focus.focus_set() # put focus back
+            return
+
+        self.withdraw()
+        self.update_idletasks()
+        self.result = self.apply()
+        self.cancel()
+
+    def cancel(self, event=None):
+
+        # Put focus back to the parent window
+        self.parent.focus_set()
+        self.destroy()
+
+    def validate(self):
+        return 1 # Override this
+
+    def apply(self):
+        return None # Override this
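The construction hooks above (`body`, `validate`, `apply`) are meant to be overridden by subclasses. A hedged sketch of the override pattern; `NameDialog` is a hypothetical example, not part of the repository, and the import fallback only keeps the sketch loadable where `tksimpledialog` is not on the path:

```python
try:
    from tkinter import ttk
    from tksimpledialog import Dialog
except ImportError:
    ttk = None       # stub out for environments without tkinter
    Dialog = object  # or without tksimpledialog on the Python path

class NameDialog(Dialog):
    # Prompt for a single text value; after the dialog closes,
    # self.result holds the entered string (via apply) or None.

    def body(self, master, message, seed, **kwargs):
        ttk.Label(master, text=message or 'Value:').grid(row=0, column=0)
        self.entry = ttk.Entry(master)
        if seed:
            self.entry.insert(0, seed)
        self.entry.grid(row=0, column=1)
        return self.entry            # widget that gets initial focus

    def validate(self):
        # Reject an empty entry; ok() keeps the dialog open on failure.
        return 1 if self.entry.get() else 0

    def apply(self):
        return self.entry.get()
```

From a running Tk application, something like `NameDialog(root, message='Library name:', seed='sky130').result` would return the entered string, or None if the dialog was cancelled.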