In and not in my head. March 27, 2015. Posted by ficial in brain dump.
One of the things I find especially challenging is pushing things from the world inside my mind to the world outside (the so-called ‘real world’). I grapple with understanding the factors that affect the difficulty of enacting that transition (I’m guessing most people are always dealing with that one way or another, if not always so explicitly). The, for want of a better word, size of something is sometimes an issue, but at other times seems to be irrelevant. In some instances it feels easy to build, enact, enable, do for long duration and at large scale to make whatever is in my mind exist outside of it. In other cases simply writing or speaking even a single sentence is insurmountable, and not due to any particular content of the sentence – it’s just that the energy, or whatever, required to breach reality’s barrier is beyond what I can achieve.
One of the reasons that I like working with / on computers is that (for me) at times they make that process at least a bit easier, though certainly not always nor for everything. I also find talking and working with people can be similar for me in this respect – in addition to new insights, viewpoints, skills, load-sharing, etc. that other people bring, the active presence of and interaction with other people makes it easier for me to make things ‘real’. The word ‘real’ is in quotes there because it’s not quite the word I want, but I can’t find any closer one. Whatever-it-is is already real in my mind. All that making it ‘really real’ does is allow others (people or things, sentient or otherwise) to experience/use/understand/interact-with it as well.
There are many directions this sort of thinking can go: communication is/as action/creation and vice versa, the observer-influenced/determined nature of reality, the limits and conditions of transitions, the relation between stories and art and math, energy gradients and tunneling, memes and the noosphere, mind-body duality as a limited view on a line that goes further in both directions, the characteristics and ‘cardinalities’ of multiverses, and so on …but hauling more of it out in any kind of detail is more than I can do right now. My questions / discussion points for the moment are: why is it sometimes so easy and sometimes so hard, and what can I (or, more to the point, anyone) do to change/control that?
Academic/research software, and rogue-like game stuff. January 12, 2015. Posted by ficial in computer games, games, open source, software, techy.
This blog is still live, just… sporadic :)
I’ve wrapped up a couple of projects relatively recently that might be of general interest. Both are java applications for pretty specific areas of study. The first is actually a work project, while the second is a personal one.
GeoShear (on GitHub and at Williams) is an application to aid structural geology research and teaching. It models shearing deformations (simple and pure), providing both an interactive visual interface and exportable quantitative data. It was created in collaboration with Charles L. MacMillan Professor of Natural Sciences Paul Karabinos as a part of NSF grant 0942313 – “Visualizing Strain in Rocks with Interactive Computer Programs”. In brief, it lets you mark a set of ellipses (representing cross-sections of pebbles) and then apply a shearing transformation to them. Charts of the pebble attributes are updated as shear is applied, so you can easily see the connection between them. The file format used is a simple tab-delimited one, so data can be entered in a spreadsheet if desired, and/or a spreadsheet can be used for further work post-deformation.
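For intuition, the two deformation modes GeoShear models reduce to simple 2x2 linear maps. Here's a sketch of the underlying math in Python (GeoShear itself is Java, and these function names are mine, not the project's API):

```python
import math

def simple_shear(points, gamma):
    """Simple shear: x' = x + gamma * y, y' = y."""
    return [(x + gamma * y, y) for x, y in points]

def pure_shear(points, k):
    """Pure shear: stretch x by k, shorten y by 1/k (area-preserving)."""
    return [(k * x, y / k) for x, y in points]

# A circular 'pebble' cross-section sampled as boundary points;
# under either shear it deforms into an ellipse.
circle = [(math.cos(2 * math.pi * i / 120), math.sin(2 * math.pi * i / 120))
          for i in range(120)]
deformed = simple_shear(circle, 1.0)
```

Applying either map to every marked ellipse and re-plotting the attribute charts is, in essence, what the interactive interface does on each shear step.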
TideMiner is a smaller, simpler tool (also available on GitHub) that’s used to calculate the flooding frequency and duration of one or more given elevations, using tide levels from NOAA tide-station data. It works best with reading intervals of an hour or less. Basically, get your tide data from NOAA, load it into TideMiner, type or paste in the elevations of concern, and click the analyze button. The results can be saved in a tab-delimited format or copied and pasted directly into a spreadsheet for further work. This one is personal both in that it was not for work and in that I created it as a gift for my father (and the larger saltmarsh ecology community).
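The core calculation is simple enough to sketch. Assuming evenly spaced readings (TideMiner is Java; this Python function and its name are illustrative, not the tool's actual API):

```python
def flooding_stats(levels, elevation, interval_hours=1.0):
    """For evenly spaced tide readings, return (flooded reading count,
    total flooded hours, flooded fraction) for one elevation of concern."""
    flooded = sum(1 for lvl in levels if lvl >= elevation)
    return flooded, flooded * interval_hours, flooded / len(levels)

# Toy hourly tide series (meters, same datum as the elevation of interest)
tides = [0.2, 0.8, 1.4, 1.1, 0.5, -0.1, 0.3, 1.0]
count, hours, fraction = flooding_stats(tides, 1.0)
```

Shorter reading intervals give a finer-grained duration estimate, which is why sub-hourly NOAA data works best.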
Islandora 7 – permissions for multisite and XACML. June 30, 2014. Posted by ficial in islandora, multisite, techy, XACML.
We used drupal-multisite to organize our repository. Permissions/access is a challenge. We used namespaces that are unique to each collection for site-level access, which required a bit of custom coding to support and has some limitations, but took much less coding and is more stable than the other main option we considered (implementing real per-collection access). Supporting XACML effectively across multiple sites then requires separate user tables for each site (backed by LDAP to unify login credentials), a separate entry in the fedora filter-drupal.xml file, and appropriate privileges granted on the DB for that site for the mysql user that filter-drupal specifies. Setting up a new site has three separate areas that require action:
- drupal : create the site using standard multi-site approach (share ldap_authorization and ldap_servers tables)
- mysql : grant access to the new database to the user that fedora uses to check credentials
- fedora : add an entry for the new site/DB to …/server/config/filter-drupal.xml, then restart fedora
In planning our repository a major challenge was presenting, and controlling access to and management of, our objects in a way that more-or-less matches how our users want and need things to work. Conceptually, our system has a tree-like structure, with potential for cross-connections. At the root is our over-all site http://unbound.williams.edu, which provides a face for the program / system as a whole and a convenient place / way to search across all the (unrestricted) objects in our system. From there we have project sites, which correspond to a particular department (e.g. http://unbound.williams.edu/williamsarchives), institutional project (http://unbound.williams.edu/facultypublications), or individual project (http://unbound.williams.edu/mayamotuldesanjosearchaeology). Within a given project there might be a single collection or multiple collections. A person or department might in turn have a single project or multiple projects, or might be involved in different projects and/or collections in different ways (e.g. managing one project, contributing to another collection, with read-only access to a third, protected collection).
Setting up the technical infrastructure and processes to support the above model was (and continues to be) challenging. We used a drupal multi-site system to organize the main site and project sites. We leveraged islandora’s built-in namespace restriction capabilities to limit given collections to given sites. We did this by associating each collection with a unique namespace. This allows us to very easily include a given collection in multiple projects (e.g. the faculty articles collection might be in both the faculty publications project and the archives project). Essentially, we wanted to be able to support object access on a per-collection basis, but the built-in support only worked with namespaces, so we made them (semi-) synonymous. There were a couple of technical challenges to making this work, and there are also some less-than-ideal limitations that go with this approach.
On the technical side, there are two places that namespace restrictions come into play: repository access and search access. On the back end there seems to be no limit to the number of namespaces that can be specified for these two areas, but the web form elements that are used for them limit the content to something too small for our purposes. We went through two levels of work-around here. First, we changed the form elements for those fields from basic inputs to text area / paragraph inputs. However, we still had the problem that there were two separate places where namespaces had to be managed, which could easily lead to problems that would greatly impact user experience. So, we created a custom module that provides a single interface controlling both areas – the namespace list that’s entered in that one field is used to set both the SOLR preferences and the site namespace config values. With this in place our namespace list for a given site might become pretty long, but it’s easy enough to manage and we never end up in a situation where there’s a mis-match between the search-based access and the repository/site-based access.
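The gist of that module is a single source of truth: one namespace list drives both settings. In spirit (our module is Drupal/PHP; this Python sketch, the setting names, and the Solr filter syntax are all hypothetical stand-ins):

```python
def sync_namespace_settings(namespace_field_text):
    """Derive both values that must stay in sync from one admin-entered
    namespace list (one namespace per line). The keys and the filter
    syntax are illustrative only, not real Islandora variable names."""
    namespaces = [ns.strip() for ns in namespace_field_text.splitlines() if ns.strip()]
    return {
        # what the repository uses to decide which collections a site shows
        "repository_namespace_restriction": " ".join(namespaces),
        # what search uses to exclude results from other sites' namespaces
        "solr_namespace_filter": " OR ".join("PID:{0}\\:*".format(ns) for ns in namespaces),
    }
```

Because both values are derived from the same input, they cannot drift apart, which was the whole point of the custom module.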
On the data structure side of things this approach creates some hard limits on what we can do. We’re trying to emulate collection-based access control, but this doesn’t do that exactly. It fails in two main ways. First, an object’s namespace isn’t necessarily the same as that of a collection that contains that object. In the case when an object is in more than one collection we’re guaranteed a mis-match for at least one of the collections. To get around this we divide our object sets more finely than we otherwise might and use site-level grouping to bring them together rather than collection-level grouping. Second, we lose hierarchical object access control. In a pure collection-based approach we would be able to nest collections and specify access by the top-level collection, but since each collection is its own namespace we have to manually manage access to whole hierarchies as individual elements. Neither of those two limitations is a game-stopper, but they do need to be taken into consideration when ingesting a new set of objects and setting up new projects and collections.
In an ideal world we’d have used collection membership directly for access control, but doing so would have required rather a lot of custom coding to implement. Essentially we’d have had to create a whole new set of fields and corresponding web forms that paralleled the namespace ones. Additionally, to make hierarchical collection membership work appropriately we’d have to get tangled in building and maintaining additional relationship fields in the RELS_EXT datastream. All certainly possible, but in our situation it required too much work and was too prone to implementation errors. We deliberately sacrificed functionality to gain stability and low technical investment and upkeep. So far it’s working OK for us.
Though we’re using namespaces as the primary way of associating given collections with given sites we still have the challenge of restricting access to collections (and individual objects) within a site via XACML. There are some subtleties in this due to how fedora checks permissions. Essentially, fedora has a component that checks in with the drupal database to verify that a user is authenticated and to check what roles the user has. This is explained briefly in the ‘Configure the Drupal Servlet Filter’ section at https://wiki.duraspace.org/pages/viewpage.action?pageId=34638844, with a very general directive to “use the Drupal LDAP module” to avoid granting too much access. Making all that actually work required a certain amount of further research and experimentation for us.
We use LDAP for our central authentication system, and connect to it for our islandora system using LDAP for drupal 7. That package has a lot of sub-pieces, only three of which we found necessary to get things working: LDAP Authentication, LDAP Authorization, and LDAP Authorization – Drupal Roles (though one could probably get away with just the first). Once that’s set up for our main site we can simplify spinning up additional sites by sharing two key tables across the sites: ldap_authorization and ldap_servers. The modules still need to be enabled for new sites, but since the tables are shared no additional configuration is needed. Additionally, if our LDAP config needs to be changed then doing it once automatically ensures it works for all the sites. We do the table sharing by setting up one drupal as the primary install (in our case it’s our main site, using a database named main_drupal) and using the prefix attribute of the databases settings variable in the individual site settings files. (see below for an example)
We originally shared the user tables as well, but that caused serious problems when trying to use XACML to control object access by role. The fedora component that checks in with drupal about user validation and roles has an interesting behaviour where it combines all roles that a given username-password combination has across all sites. So, with a single, shared user table a user effectively has the same username and password for all sites, which means that the user would get, for all sites, any role they have on any site. In other words, making a user an admin on one site would give them admin access to all objects on all sites. So, we have separate user tables. However, because we’re using LDAP as our authentication system this doesn’t impact user management – all the user management happens external to drupal anyway.
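A toy model makes the problem concrete (the dict-based "user tables" are stand-ins for the real drupal tables, and effective_roles is my name for the behaviour we observed, not fedora's actual code):

```python
def effective_roles(site_user_tables, username, password):
    """Union the roles of every matching username/password pair across
    all configured site databases, as the drupal filter appears to do."""
    roles = set()
    for table in site_user_tables:
        user = table.get(username)
        if user and user["pass"] == password:
            roles |= set(user["roles"])
    return roles

# With shared credentials, a site-A admin effectively gets admin-level
# XACML access everywhere, because the union includes 'administrator'.
site_a = {"alice": {"pass": "pw", "roles": {"administrator"}}}
site_b = {"alice": {"pass": "pw", "roles": {"authenticated user"}}}
combined = effective_roles([site_a, site_b], "alice", "pw")
```

With separate user tables (and thus per-site password hashes), the same lookup only matches on the site the user actually belongs to, which is why splitting the tables fixed the problem.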
However, since we’re using separate databases to hold all those different user tables (and other site-specific stuff, of course), the db user that fedora uses to check user authentication and roles needs to be given access to those databases. One could create a user and give them universal grants, but that seems… suspect, from a security standpoint. So, each time we create a new project site we need to make sure to grant that user select privileges for the new database. Also, simply granting the user those privileges isn’t enough in itself; the fedora component also needs to be configured to actually check the new database. This is done by adding an additional connection specification in the …/server/config/filter-drupal.xml file.
I think that fedora makes a separate DB connection for each entry in the file, so at some point one runs into issues of scalability, where for any kind of islandora access to fedora data the system is checking against N databases. Hopefully fedora uses some sort of connection pooling and caching to mitigate this somewhat, but I don’t really know.
In summary, to set up a new islandora-enabled instance of a drupal multi-site:
- have LDAP installed and configured for some primary site (which for the purposes of the example below uses a database called main_drupal)
- do all the usual multi-site set-up stuff
- in the new site’s settings.php file, specify the primary site db as the prefix for the ldap_authorization and ldap_servers tables
$databases = array(
  'default' => array(
    'default' => array(
      'database' => 'sitedbname',
      'username' => 'a_db_user',
      'password' => ',jdFN3952oiU54h6n2o987ytglaKEn68Yu34',
      'host' => 'mysql-machine.institution.edu',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => array(
        'default' => '',
        'ldap_authorization' => 'main_drupal.',
        'ldap_servers' => 'main_drupal.',
      ),
    ),
  ),
);
- in the database that backs drupal, grant the fedora db user select access to the new database (strictly, it probably only needs access to a few specific tables: users, users_roles, and role, though that's more work to specify and maintain)
GRANT SELECT ON SITEDBNAME.* TO 'fedora_mysql_user'@'fedora-machine.institution.edu';
- on the fedora host add an entry to …/server/config/filter-drupal.xml for the new database
<connection server="mysql-machine.institution.edu" dbname="sitedbname" user="fedora_mysql_user" password="nRExw890zV34hl56N245AV078kk45" port="3306">
  <sql>
    SELECT DISTINCT u.uid AS userid, u.name AS Name, u.pass AS Pass, r.name AS Role FROM (users u LEFT JOIN users_roles ON u.uid=users_roles.uid) LEFT JOIN role r ON r.rid=users_roles.rid WHERE u.name=? AND u.pass=?;
  </sql>
</connection>
- don’t forget to restart fedora so that the new filter-drupal stuff is used
Islandora 7 – splitting CSV data on ingest. June 24, 2014. Posted by ficial in code fixes, islandora, techy, xsl.
It’s tricky to tokenize CSV values on ingest using a MODS form. To do so, create a self-transform XSL and manually tokenize the appropriate fields – create an XSL to do the tokenizing in …./sites/all/modules/islandora_xml_forms/builder/self_transforms/, then set that as the self-transform for the relevant form. You’ll need to create your own CSV tokenizer since Islandora 7 uses an older version of XSL. See below for example code.
In our Islandora install we’re using MODS as the main meta-data schema. That is, the ingest forms are set up for generating MODS XML. However, the way the form is set up is anti-helpful for some of the people that are doing our data loads. Specifically, the subject-topic, subject-geographic, and subject-temporal fields were not being processed as people expected.
Those three fields are multi-value ones, meaning they support a structure like:

<subject>
  <topic>cows</topic>
  <topic>bovines</topic>
  <topic>farm animals</topic>
</subject>

However, when using the form we want to be able to enter them as CSV values – e.g. ‘cows, bovines, farm animals’. Unfortunately, the default behavior is to treat such input as a single value, giving a result like:

<topic>cows, bovines, farm animals</topic>
The Islandora 7 ingest forms system does provide a place where this can be corrected, but it’s subtle and tricky. Specifically, one has to create an XSL to do the proper tokenizing and set that up as a ‘self transform’ for the form. Creating the tokenizing XSL is in turn made more difficult because Islandora 7 uses an XSL version earlier than 2.0, which means that there is no built-in tokenizing function. The place this needs to be done is in …/sites/all/modules/islandora_xml_forms/builder/self_transforms/, which took me a while to find because I was misled by the ‘builder’ folder – code in that folder relates not only to the building of forms, but also to the using/processing of forms.
Following some suggestions on various sites, I organized my tokenizing code in a separate file and included/imported it into the self-transform. Here’s where I ended up:
TOKENIZER (kept in its own file; the file name csv_tokenizer.xsl below is mine, for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:mods="http://www.loc.gov/mods/v3">
<xsl:template name="csvtokenizer">
  <xsl:param name="commaStr"/>
  <xsl:param name="tagLabel"/>
  <xsl:if test="normalize-space($commaStr) != ''">
    <xsl:choose>
      <xsl:when test="contains($commaStr, ',')">
        <!-- recurse on the pieces before and after the first comma -->
        <xsl:call-template name="csvtokenizer">
          <xsl:with-param name="commaStr" select="substring-before($commaStr,',')"/>
          <xsl:with-param name="tagLabel" select="$tagLabel"/>
        </xsl:call-template>
        <xsl:call-template name="csvtokenizer">
          <xsl:with-param name="commaStr" select="substring-after($commaStr,',')"/>
          <xsl:with-param name="tagLabel" select="$tagLabel"/>
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <!-- base case: a single term; emit it in an element, minus leading whitespace -->
        <xsl:if test="normalize-space($tagLabel) != ''">
          <xsl:element name="{$tagLabel}">
            <xsl:value-of select="substring($commaStr, string-length(substring-before($commaStr, substring(normalize-space($commaStr), 1, 1))) + 1)"/>
          </xsl:element>
        </xsl:if>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:if>
</xsl:template>
</xsl:stylesheet>
SELF TRANSFORM (cleanup_mods.xsl - NOTE: this also removes empty fields):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:mods="http://www.loc.gov/mods/v3">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" media-type="text/xml"/>
<!-- pull in the tokenizer template; href must match the tokenizer's file name -->
<xsl:include href="csv_tokenizer.xsl"/>
<!-- identity transform: copy everything through by default -->
<xsl:template match="@*|node()">
  <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
</xsl:template>
<!-- drop empty fields -->
<xsl:template match="*[not(normalize-space())][not(@*)]"/>
<!-- tokenize the three CSV-entered subject fields -->
<xsl:template match="mods:topic">
  <xsl:call-template name="csvtokenizer">
    <xsl:with-param name="commaStr" select="normalize-space(.)"/>
    <xsl:with-param name="tagLabel" select="'mods:topic'"/>
  </xsl:call-template>
</xsl:template>
<xsl:template match="mods:geographic">
  <xsl:call-template name="csvtokenizer">
    <xsl:with-param name="commaStr" select="normalize-space(.)"/>
    <xsl:with-param name="tagLabel" select="'mods:geographic'"/>
  </xsl:call-template>
</xsl:template>
<xsl:template match="mods:temporal">
  <xsl:call-template name="csvtokenizer">
    <xsl:with-param name="commaStr" select="normalize-space(.)"/>
    <xsl:with-param name="tagLabel" select="'mods:temporal'"/>
  </xsl:call-template>
</xsl:template>
</xsl:stylesheet>
I could have combined the three tokenizing template matches into a single one with or-ed parameters and dynamic tag label, but I find the code here much easier to read and the maintenance cost very low.
The self-transform runs before any other transforms, so the splitting done here propagates downstream without any further work.
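For reference, the net effect of the tokenizing is easy to state outside XSL. A Python paraphrase (not part of the actual ingest pipeline, just the behaviour the templates implement):

```python
def split_csv_subject(tag, value):
    """Turn a CSV-entered subject string into one element per trimmed
    term, mirroring what the recursive XSL tokenizer produces."""
    return ["<{0}>{1}</{0}>".format(tag, t.strip())
            for t in value.split(",") if t.strip()]
```

That is the whole trick; the XSL 1.0 recursion exists only because that version has no tokenize() function.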
Expectations in Game Design. June 18, 2014. Posted by ficial in game design, games.
Tags: expectations, game, game design
A key part of a good game design is matching the players’ expectations to the game play actually delivered. This is one of the key points where theme is relevant (though certainly not the only one). More broadly speaking, expectation management is the primary issue of importance where the mechanical aspects of a game (specific rules, general complexity, length of play, etc.) intersect the non-mechanical aspects (theme, graphics, physical pieces, etc.). Kinds of expectations can be divided into two broad categories: mechanics and experience.
Mechanical expectations have to do with how closely the rules of play match the assumptions / intuition of the player. Essentially, this is a process of drawing on the out-of-game assumed background that a player has (e.g. gold is worth more than silver, people need to be fed, wood burns, etc.) and using symbols and story to match that to the mechanical elements of play (i.e. the rules and game state). For example, if a game includes some bits labeled ‘coins’, then players naturally understand the idea of spending them to purchase a building. If those bits are instead labeled ‘cows’ then it requires more explanation for a player to understand that some number of them may be converted into a building – it becomes a less intuitive rule. Conversely, if there’s a rule that says that at certain times having two of those bits allows a player to get a third, then ‘cows’ makes sense in that two animals can be bred to produce another, whereas ‘coins’ requires something much more abstract to justify the addition. In this realm the theme of the game suggests the general kinds of actions that are and aren’t available and the kinds of outcomes that might be expected from those actions. The physical pieces both signify particular things (e.g. larger and heavier things are more important; given two kinds of markers, having one green and one brown is less meaningful than having one green and shaped like a leaf and the other brown and shaped like a horse), and suggest what those bits are used for (e.g. a gold colored disc to represent a valuable coin makes a lot more sense than a white cube; a token shaped like a bone might be fed to a dog, or used to build a skeleton; etc.). Graphics allow for illustrative suggestions of relevant rules, and also allow for easy reference to other parts of the game (via illustrations or more abstract icons / symbols).
When a designer has done a good job managing the mechanical expectations then the resulting game is much easier to learn, teach, and play. The actual play tends to be smooth, and faster than it otherwise might be. This is usually what people are talking about when they say a game is ‘well themed’ or ‘the theme is well matched’. When mechanical expectations are not managed well, players have a hard time learning the game, and even after they learn it play tends to be slower and players are more likely to miss and/or misinterpret rules. Criticisms tend to be things like ‘it didn’t make sense’ or ‘the theme was pasted on’. Overall, doing a good job with mechanical expectations turns a set of rules and abstract ideas into a good game. To turn a good game into a great game requires managing experiential expectations.
Player experience is the emotions and thoughts that a player has during the course of play — are players playing to have fun, or to compete? where/how does a player get a sense of accomplishment? when does the player feel the most tense, and why? how does player A feel about player B (in the context of game play)? does play feel deep and complex, or light? Experiential expectations are a much fuzzier concept than mechanical ones, in large part because player experience depends so much on the players themselves. The tools for setting expectations are the same – art, setting, iconography, language, story, etc. – but the goal in this case is not to draw parallels between exterior context and in-game elements, but instead to put players in a frame of mind where the experiences that the designer is attempting to create are easy to achieve and more intense when they happen. The art of the game can influence expectations via style, color scheme, size / prevalence, and subject. The setting can suggest particular feelings (e.g. when a player is told that a game is set in a dark cave then they’re much more prepared to feel limitation, enclosure, isolation, and fright than if they’re told the game is set in a sunny field). There are whole disciplines devoted to thinking about how iconography and typography affect a viewer’s feelings (https://www.google.com/?gws_rd=ssl#q=how+typography+affects+feelings). Language of course has a huge influence (compare ‘the triangle token follows the round token’ to ‘the tiger token stalks the farmer token’), and story or a less rigid narrative element allows an even more effective manipulation of player feelings. Video games can also borrow tricks from all the expertise the movie industry has developed – music, sound, motion, visual effects, background action, etc.
There’s also an interesting sub-set of experiences that can be thought of as having a target magnitude / degree – pretty much any aspect where a player might ask ‘how much’ (e.g. how much depth is there? how much cooperation? how much luck?). For these, the target for managing expectations is actually slightly offset below (or ‘closer to neutral than’) the degree of effect the designer is trying to evoke during play. The relation of the actual experience to the expectation can greatly affect the intensity/excitement of the experience. Consider an experience for which the designer has established an expectation of level E, and a player’s actual experience at level A. When A is less than E then the player is bored / underwhelmed with that aspect of the game. When A is equal to E then the player is satisfied – the game delivered what it promised. When A is just a little bit more than E then the player is excited because the game has surpassed their expectations – this is the sweet spot of experiential expectation management. When A is a lot more than E then the player is overwhelmed and blocks out that part of the play experience or loses interest entirely.
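The E-vs-A relation compresses into a toy function (the numeric sweet-spot threshold is invented for illustration; the point is the ordering of the bands, not the numbers):

```python
def reaction(expected, actual, sweet_spot=0.25):
    """Map expectation level E and actual experience level A to the
    reactions described above; sweet_spot is an illustrative threshold."""
    delta = actual - expected
    if delta < 0:
        return "bored"        # game under-delivers on this aspect
    if delta == 0:
        return "satisfied"    # game delivers exactly what it promised
    if delta <= sweet_spot:
        return "excited"      # slightly surpassed expectations: the sweet spot
    return "overwhelmed"      # far past expectations: player tunes out
```

Setting the advertised expectation slightly below the intended experience is what steers players into the "excited" band.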
Overall, setting the players’ expectations is a vital aspect of game design. A game that is mechanically good will be disliked if the players are expecting one thing but getting another, while a game that might be mechanically uninteresting or even quite flawed will be thoroughly enjoyed if it’s clear about the experience it delivers and that matches what the players want.