Planet Drupal


Paul Rowell: My journey in Drupal, 4 years on

Sun, 03/29/2015 - 20:38

Categories:

Drupal

I’m approaching the 4 year mark at my agency and along with it my 4 year mark working with Drupal. It’s been an interesting journey so far and I’ve learned a fair bit, but, as with anything in technology, there’s still a great deal left to discover. This is my journey so far and a few key points I learned along the way.

Blue Drop Shop: Camp Record Beta Test Three: MidCamp 2015

Sat, 03/28/2015 - 20:08

Categories:

Drupal

After my #epicfail that was BADCamp, to say that I was entering MidCamp with trepidation would be the understatement of the year. Two full days of sessions and a 1-and-1 track record were weighing heavily on my soul. Add to the mix that I was coming directly off a 5-day con my company runs, and that I was responsible for MidCamp venue and catering logistics. Oh right, and I ran out of time to make instructions and train anyone else on setup, which only added to my on-site burden.

Testing is good.

After BADCamp, I added a powered 4-port USB hub to the kits, as well as an accessory pack for the H2N voice recorder, mainly for the A/C power adapter and remote. In total, these two items bring the current cost of the kit to about $425.

In addition, at one of our venue walk-throughs, I was able to actually test the kits with the projectors UIC would be using. The units in two of the rooms had unexplained, random few-second blackouts of the screens, but the records were good and the rest of the rooms checked out.

Success.

After the mad scramble of setting up three breakout rooms and the main stage leading up to the opening keynote, I can't begin to describe the feeling in the pit of my stomach when I pulled the USB stick after stopping the keynote recording. I can't begin to describe the elation I felt after seeing a full record, complete with audio.

We hit a few snags with presenters not starting their records (fixable), older PCs not connecting (possibly fixable), and a couple of sessions that didn't have audio (hello, redundancy from the voice recorder). Aside from that, froboy and I were able to trim and upload all the successful records during the Sunday sprint.

A huge shout out also goes to jason.bell for helping me on-site with setups and capture. He helped me during Fox Valley’s camp, so I deputized him as soon as I saw him Friday morning.

Learnings.

With the addition of the powered USB hub, we no longer need to steal any ports from the presenter laptop. For all of the first day, we were unnecessarily hooking up the hub’s USB cable to the presenter laptop. Doing this caused a restart of the record kit. We did lose a session to a presenter laptop going to sleep, and I have to wonder whether we would have still captured it if the hub hadn’t been attached.

The VGA to HDMI dongle is too unreliable to be part of the kit. When it was used, either there was no connection, or it would cycle between on and off. Most, if not all, machines that didn't have Mini DisplayPort or direct HDMI out had full-size DisplayPort. I will be testing a DisplayPort to HDMI dongle as a more reliable option.

Redundant audio is essential. The default record format for the voice recorders is a WAV file. These are the best quality, but enormous, which is why I failed to capture most of BADCamp's audio (RTFM, right?). By changing the settings to 192 kbps MP3, two days of session audio barely made a dent in the 2GB cards that are included with the recorders. Thankfully, this saved three session records: two with no audio at all (still a mystery) and one with blown-out audio.

Trimming and combining in YouTube is a thing. Kudos again to froboy for pointing me to YouTube’s editing capabilities. A couple sessions had split records (also a mystery), which we then stitched together after upload, and several sessions needed some pre- or post-record trimming. This can all be done in YouTube instead of using a video editor and re-encoding. Granted, YouTube takes what seems like forever to process, but it works and once you do the editing, you can forget about it.

There is a known issue with Mini DisplayPort to HDMI where a green tint is added to the output. Setting the external PVR to 720p generally fixed this. There were a couple of times when it didn't, but switching between direct HDMI and Mini DisplayPort to HDMI seemed to resolve most of the issues. Sorry to the few presenters who opted for funky colors before we learned this during the camp. The recording is always fine, but the on-site experience is borked.

Finally, we need to tell presenters to adjust their energy saver settings. I take this for granted, because the con my company runs is for marketing people who present frequently, and this is basically just assumed to be set correctly. We are a more casual bunch and don’t fret when the laptop sleeps or the screen saver comes up during a presentation. Just move the cursor and roll with it. But that can kill a record...even with the Drupal Association kits. I do plan to test this, now that I’ve learned we don’t need any power at all from the presenter laptop, but it’s still an easy fix with documentation.

Next steps.

Documentation. I need to make simple instruction sheets to include with the kits. Overall, the kits are really easy to use and connect, but they're completely unfamiliar territory. With foolproof instructions, presenters can be at ease and room monitors can be tasked with assisting without fear.

Packaging. With the mad dash to set these up — combined with hourly hookups — these were a hot mess on the podium. I’ll be working to tighten these up so they look less intimidating and take up less space. No idea what this entails yet, so I’ll gladly accept ideas.

Testing. As mentioned, I will test regular display port to HDMI, as well as various sleep states while recording.

Shipping. Because these kits are so lightweight, part of the plan is to be able to share them with regional camps. There was a lot of interest in these kits from other organizers during the camp. Someone from Twin Cities even offered to purchase a kit to add to the mix, as long as they could borrow the others. A Pelican case with adjustable inserts would be just the ticket.

Sponsors. If you are willing to help finance this project, please contact me at kthull@bluedropshop.com. While Fox Valley Camp owns three kits and MidCamp owns one, wouldn’t it be great to have your branding on these as they make their way around the camp circuit? The equipment costs have (mostly) been reimbursed, but I’ve devoted a lot of time to testing and documenting the process, and will be spending more time with the next steps listed above.


Amazee Labs: Drupal Camp Johannesburg 2015

Sat, 03/28/2015 - 19:52

Categories:

Drupal
Drupal Camp Johannesburg 2015

Today I had the pleasure to attend Johannesburg's Drupal Camp 2015.

The event was organized by DASA, which is doing a stunning job of gathering and energizing South Africa's Drupal community. From community subjects to Drupal 8, we got to see a lekker variety of talks, including those by Michael and me on "Drupal 8" and "How to run a successful Drupal Shop".

Special thanks to the organizers Riaan, Renate, Adam, Greg and Robin. Up next will be Drupal Camp Cape Town in September 2015.

DrupalOnWindows: Adding native JSON storage support in Drupal 7 or how to mix RDBM with NoSQL

Sat, 03/28/2015 - 17:45

Categories:

Drupal

I recently read a very interesting article by a long-time .NET developer comparing the MEAN stack (Mongo-Express-Angular-Node) with traditional .NET application design. (I can't post the link because I'm unable to find the article!)

Among other things, he compared the tedious process of adding a new field in the RDBMS model (modifying the database, then the data layer, then the views and controllers) with the MEAN stack, where it was as simple as adding two lines of code to the UI.

More articles...

Lullabot: Drupalize.Me 2015 Spring Update

Fri, 03/27/2015 - 18:32

Categories:

Drupal

The Drupalize.Me team typically gets together each quarter to review how we did on our goals and to plan what we want to accomplish and prioritize in the upcoming quarter. These goals range from site upgrades to our next content sprints. A few weeks ago we all flew into Atlanta and did just that. We feel it is important to communicate to our members and the Drupal community at large what we've been doing in the world of Drupal training and what our plans are for the near future. What better way to do this than our own podcast?

Chromatic: Integrating Recurly and Drupal

Fri, 03/27/2015 - 15:21

Categories:

Drupal

If you’re working on a site that needs subscriptions, take a look at Recurly. Recurly’s biggest strength is its simple handling of subscriptions, billing, invoices, and all that goes along with it. But how do you get that integrated into your Drupal site? Let’s walk through it.

There are a handful of pieces that work to connect your Recurly account and your Drupal site.
  1. The Recurly PHP library.
  2. The recurly.js library (optional, but recommended).
  3. The Recurly module for Drupal.

The first thing you need to do is bookmark the Recurly API documentation.
Note: The Drupal Recurly module is still using v2 of the API. A re-write of the module to support v3 is in the works, but we have few active maintainers right now (few meaning one, and you’re looking at her). If you find this module of use or potential interest, pop into the issue queue and lend a hand writing or reviewing patches!

Okay, now that I’ve gotten that pitch out of the way, let’s get started.

I’ll be using a new Recurly account and a fresh install of Drupal 7.35 on a local MAMP environment. I’ll also be using drush as I go along. (Not using drush?! Stop reading this and get it set up, then come back. Your life will be easier and you’ll thank us.)

  1. The first step is to sign up at https://recurly.com/ and get your account set up with your subscription plan(s). Your account will start out in a sandbox mode, and once you have everything set up with Recurly (it’s a paid service), you can switch to production mode. For our production site, we have a separate account that’s entirely in sandbox mode just for dev and QA, which is nice for testing, knowing we can’t break anything.
  2. The Recurly module depends on the Libraries module, so make sure you’ve got that installed (the 7.x-2.x version): drush dl libraries && drush en libraries
  3. You’ll need the Recurly Client PHP library, which you’ll need to put into sites/all/libraries/recurly. This is also an open-source, community-supported library, using v2 of the Recurly API. If you’re using composer, you can set this as a dependency. You will probably have to make the libraries directory. From the root of your installation, run mkdir sites/all/libraries.
  4. You need the Recurly module, which comes with two sub-modules: Recurly Hosted Pages and Recurly.js. drush dl recurly && drush en recurly
  5. If you are using Recurly.js, you will need that library, v2 of which can be found here. This will need to be placed into sites/all/libraries/recurly-js.
    Your /libraries/ directory should look something like this now:

      sites/all/libraries/
      ├── recurly/
      └── recurly-js/
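
As a quick sketch of where those libraries land (the paths come from steps 3 and 5 above; run from the root of your Drupal installation):

```shell
# Create the libraries layout described above (run from the Drupal root).
# The downloaded library code would then be unpacked into these folders.
mkdir -p sites/all/libraries/recurly     # Recurly PHP client library
mkdir -p sites/all/libraries/recurly-js  # recurly.js v2 library
ls sites/all/libraries
```
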
Which integration option is best for my site?

There are three different ways to use Recurly with Drupal.

You can just use the library and the module, which include some built-in pages and basic functionality. If you need a great deal of customization and your own functionality, this might be the option for you.

Recurly offers hosted pages, for which there is also a Drupal sub-module. This is the least amount of integration with Drupal; your site won’t be handling any of the account management. If you are low on dev hours or availability, this may be a good option.

Thirdly, and this is the option we are using for one of our clients and demonstrating in this tutorial, you can use the recurly.js library (there is a sub-module to integrate this). Recurly.js is a client-side credit-card authorization service which keeps credit card data from ever touching your server. Users can then make payments directly from your site, but with much less responsibility on your end. You can still do a great deal of customization around the forms – this is what we do, as well as customized versions of the built-in pages.

Please note: Whichever of these options you choose, your site will still need a level of PCI-DSS Compliance (Payment Card Industry Data Security Standard). You can read more about PCI Compliance here. This is not prohibitively complex or difficult, and just requires a self-assessment questionnaire.

Settings

You should now have everything in the right place. Let’s get set up.

  1. Go to yoursite.dev/admin/config (just click Configuration at the top) and you’ll see Recurly under Web Services.
  2. You’ll now see a form with a handful of settings. Here’s where to find the values in your Recurly account. Once you set up a subscription plan in Recurly, you’ll find yourself on this page. On the right hand side, go to API Credentials. You may have to scroll down or collapse some menus in order to see it.
  3. Your Private API Key is the first key found on this page (I’ve blocked mine out):
  4. Next, you’ll need to go to Manage Transparent Post Keys on the right. You will not need the public key, as it’s not used in Recurly.js v2.
  5. Click to Enable Transparent Post and Recurly.js v2 API.
  6. Now you’ll see your key. This is the value you’ll enter into the Transparent Post Private Key field.
  7. The last basic setup step is to enter your subdomain. The help text for this field is currently incorrect as of 3/26/2015 and will be corrected in the next release. It is correct in the README file, and on the project page. There is no longer a -test suffix for sandbox mode. Copy your subdomain either from the address bar or from the Site Settings. You don’t need the entire url, so in my case, the subdomain is alanna-demo.
  8. With these settings, you can accept the rest of the default values and be ready to go. The rest of the configuration is specific to how you’d like to set up your account, how your subscription is configured, what fields you want to record in Recurly, how much custom configuration you want to do, and what functionality you need. The next step, if you are using Recurly’s built-in pages, is to enable your subscription plans. In Drupal, head over to the Subscription Plans tab and enable the plans you want to use on your site. Here I’ve just created one test plan in Recurly. Check the boxes next to the plan(s) you want enabled, and click Update Plans.
Getting Ready for Customers

So you have Recurly integrated, but how are people going to use it on your Drupal site? Good question. For this tutorial, we’ll use Recurly.js. Make sure you enable the sub-module if you haven’t already: drush en recurlyjs. Now you’ll see some new options on the Recurly admin settings page.

I’m going to keep the defaults for this example. Now when you go to a user account page, you’ll see a Subscription tab with the option to sign up for a plan.

Clicking Sign up will bring you to the signup page provided by Recurly.js.

After filling out the fields and clicking Purchase, you’ll see a handful of brand new tabs. I set this subscription plan to have a trial period, which is reflected here.

Keep in mind, this is the default Drupal theme with no styling applied at all. If you head over to your Recurly account, you’ll see this new subscription.

There are a lot of configuration options, but your site is now integrated with Recurly. You can sign up, change, view, and cancel accounts. If you choose to use coupons, you can do that as well, and we’ve done all of this without any custom code.

If you have any questions, please read the documentation, or head over to the Recurly project page on Drupal.org and see if it’s answered in the issue queue. If not, make sure to submit your issue so that we can address it!

Deeson: Drush Registry Rebuild

Fri, 03/27/2015 - 14:30

Categories:

Drupal

Keep Calm and Clear Cache!

This is an oft-used phrase in Drupal land. Clearing the cache fixes many issues that can occur in Drupal, usually after a change is made and then isn't reflected on the site.

But sometimes, clearing cache isn't enough and a registry rebuild is in order.

The Drupal 7 registry contains an inventory of all classes and interfaces for all enabled modules and Drupal's core files. The registry stores the path to the file that a given class or interface is defined in, and loads the file when necessary. On occasion a class may be moved or renamed, and then Drupal doesn't know where to find it, and what appear to be unrecoverable problems occur.

One such example might be if you move the location of a module. This can happen if you have taken over a site where all the contrib and custom modules are stored in the sites/all/modules folder and you want to separate that out into sites/all/modules/contrib and sites/all/modules/custom. After moving the modules into your neat subfolders, things stop working and clearing caches doesn't seem to help.

Enter registry rebuild. This isn't a module, it's a drush command. After downloading it from drupal.org, the registry_rebuild folder should be placed into the directory sites/all/drush.

You should then clear the drush cache so drush knows about the new command:

drush cc drush

Then you are ready to rebuild the registry:

drush rr

Registry rebuild is a standard tool we use on all projects now and forms part of our deployment scripts when new code is deployed to an environment.

So the next time you feel yourself about to tear your hair out and you've run clear cache ten times, keep calm and give registry rebuild a try.

Joachim's blog: A script for making patches

Fri, 03/27/2015 - 13:46

Categories:

Drupal

I have a standard format for patch names: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, it involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on GitHub as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request to that gives us data about the issue node including the comment count.
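
A minimal sketch of that assembly (not the actual dorgpatch script; the branch, project, and comment number below are hypothetical stand-ins, with the comment number hard-coded in place of the drupal.org API call):

```shell
# Sketch: assemble a patch name in the 1234-99.project.brief-description format.
# All three values are example stand-ins; dorgpatch derives them automatically.
branch="1234-fix-node-access"   # normally: git rev-parse --abbrev-ref HEAD
project="my_module"             # normally: the name of the current folder
comment=99                      # normally: the issue's comment count from the drupal.org API

issue=${branch%%-*}             # everything before the first dash: 1234
description=${branch#*-}        # everything after the first dash: fix-node-access

echo "${issue}-${comment}.${project}.${description}.patch"
# → 1234-99.my_module.fix-node-access.patch
```
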

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (i.e., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.
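
That branch matching can be sketched with a single extended regex (an illustrative pattern, not necessarily the one dorgpatch uses) accepting both the contrib-style 8.x-4.x form and the core-style 8.0.x form:

```shell
# Sketch: classify branch names; the regex accepts contrib-style (8.x-4.x)
# and core-style (8.0.x) Drupal development branches, and rejects the rest.
pattern='^[0-9]+\.(x-[0-9]+\.x|[0-9]+\.x)$'

for branch in 8.x-4.x 8.0.x 7.x-3.x 1234-brief-description; do
  if echo "$branch" | grep -Eq "$pattern"; then
    echo "$branch: development branch"
  else
    echo "$branch: feature branch"
  fi
done
```
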

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)
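
Steps 5 and 6 can be exercised in a throwaway repository (a sketch using the example branch names from the list above; requires git):

```shell
# Throwaway repo exercising steps 5 and 6 above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

echo base > file.txt && git add file.txt && git commit -qm "base"       # shared starting point
git branch 8.x-4.x
git checkout -q -b 1234-brief-description 8.x-4.x
echo fix > fix.txt && git add fix.txt && git commit -qm "fix the bug"   # step 2
git checkout -q -b 1234-tests
echo test > test.txt && git add test.txt && git commit -qm "add test"   # step 4

# Step 5: make the tests-only commits fork directly off 8.x-4.x.
git rebase -q --onto 8.x-4.x 1234-brief-description 1234-tests

# Step 6: merge the tests back into the feature branch.
git checkout -q 1234-brief-description
git merge -q --no-edit 1234-tests

git log --oneline   # the feature branch now contains the fix, the rebased test, and a merge commit
```
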

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

Tags: git, patching, drupal.org, workflow

agoradesign: Introducing the Outdated Browser module

Fri, 03/27/2015 - 13:25

Categories:

Drupal

We proudly present our first official drupal.org project, which we released about two months ago: the Outdated Browser module. It detects outdated browsers and advises users to upgrade to a new version, in a very pretty-looking way.

Drupal Watchdog: Entity Storage, the Drupal 8 Way

Fri, 03/27/2015 - 12:18

Categories:

Drupal

In Drupal 7 the Field API introduced the concept of swappable field storage. This means that field data can live in any kind of storage, for instance a NoSQL database like MongoDB, provided that a corresponding backend is enabled in the system. This feature allows support for some nice use cases, like remotely stored entity data, or exploiting storage backends that perform better in specific scenarios. However, it also introduces some problems with entity querying, because a query involving conditions on two fields might end up needing to query two different storage backends, which may become impractical or simply unfeasible.

That's the main reason why in Drupal 8 we switched from field-based storage to entity-based storage, which means that all fields attached to an entity type share the same storage backend. This nicely resolves the querying issue without imposing any practical limitation, because to obtain a truly working system you were basically forced to configure all fields attached to the same entity type to share the same storage engine anyway. The main feature that was dropped in the process was the ability to share a field between different entity types, which was another design choice that introduced quite a few troubles of its own and had no compelling reason to exist.

With this change each entity type has a dedicated storage handler, which, for fieldable entity types, is responsible for loading, storing, and deleting field data. The storage handler is defined in the handlers section of the entity type definition, through the storage key (surprise!), and can be swapped by modules implementing hook_entity_type_alter().

Querying Entity Data

Since we now support pluggable storage backends, we need to write storage-agnostic contrib code. This means we cannot assume entities of any type will be stored in a SQL database, hence we need to rely more than ever on the Entity Query API, which is the successor of the Entity Field Query system available in Drupal 7. This API allows you to write complex queries involving relationships between entity types (implemented via entity reference fields) and aggregation, without making any assumptions about the underlying storage. Each storage backend requires a corresponding entity query backend, translating the generic query into a storage-specific one. For instance, the default SQL query backend translates entity relationships to JOINs between entity data tables.

Entity identifiers can be obtained via an entity query or any other viable means, but existing entity (field) data should always be obtained from the storage handler via a load operation. Contrib module authors should be aware that retrieving partial entity data via direct DB queries is a deprecated approach and is strongly discouraged. By doing this you are completely bypassing many layers of the Entity API, including the entity cache system, which is likely to make your code less performant than the recommended approach. Aside from that, your code will break as soon as the storage backend is changed, and may not work as intended with modules correctly exploiting the API. The only legitimate use of backend-specific queries is when they cannot be expressed through the Entity Query API. Even in this case, only entity identifiers should be retrieved and then used to perform a regular (multiple) load operation.

Storage Schema

Probably one of the biggest changes introduced with the Entity Storage API is that the storage backend is now responsible for managing its own schema, if it uses any. Entity type and field definitions are used to derive the information required to generate the storage schema. For instance, the core SQL storage creates (and deletes) all the tables required to store data for the entity types it manages. An entity type can define a storage schema handler via the aptly-named storage_schema key in the handlers section of the entity type definition, although it does not need to define one if it has no use for it.

Updates are also supported, and they are managed via the regular DB updates UI, which means that the schema will be adapted when the entity type and field definitions change, or when they are added or removed. The definition update manager also triggers some events for entity type and field definitions, which can be useful for reacting to the related changes. It is important to note that not all kinds of changes are allowed: if a change implies a data migration, Drupal will refuse to apply it, and a migration (or a manual adjustment) will be required to proceed.

This means that if a module requires an additional field on a particular entity type to implement its business logic, it just needs to provide a field definition and apply changes (there is also an API available to do this), and the system will do the rest. The schema will be created, if needed, and field data will be natively loaded and stored. This is definitely a good reason to define every piece of data attached to an entity type as a field. However, if for any reason the system-provided storage is not a good fit, a field definition can specify that it has custom storage, which means the field provider will handle storage on its own. A typical example is computed fields, which may need no storage at all.

Core SQL Storage

The default storage backend provided by core is obviously SQL-based. It distinguishes between shared field tables and dedicated field tables: the former are used to store data for all the single-value base fields, that is, fields attached to every bundle, like the node title, while the latter are used to store data for multiple-value base fields and for bundle fields, which are attached only to certain bundles. As the name suggests, dedicated tables store data for just one field.

The default storage supports four different shared table layouts depending on whether the entity type is translatable and/or revisionable:

  • Simple entity types use only a single table, the base table, to store all base field data.
    base:          | entity_id | uuid | bundle_name | label | … |
  • Translatable entity types use two shared tables: the base table stores entity keys and metadata only, while the data table stores base field data per language.
    base:          | entity_id | uuid | bundle_name | langcode |
    data:          | entity_id | bundle_name | langcode | default_langcode | label | … |
  • Revisionable entity types also use two shared tables: the base table stores all base field data, while the revision table stores revision data for revisionable base fields and revision metadata.
    base:          | entity_id | revision_id | uuid | bundle_name | label | … |
    revision:      | entity_id | revision_id | label | revision_timestamp | revision_uid | revision_log | … |
  • Translatable and revisionable entity types use four shared tables, combining the types described above: the base table stores entity keys and metadata only, the data table stores base field data per language, the revision table stores basic entity key revisions and revision metadata, and finally the revision data table stores base field revision data per language for revisionable fields.
    base:          | entity_id | revision_id | uuid | bundle_name | langcode |
    data:          | entity_id | revision_id | bundle_name | langcode | default_langcode | label | … |
    revision:      | entity_id | revision_id | langcode | revision_timestamp | revision_uid | revision_log |
    revision data: | entity_id | revision_id | langcode | default_langcode | label | … |

The SQL storage schema handler supports switching between these different table layouts, if the entity type definition changes and no data is stored yet.

Core SQL storage aims to support any table layout, hence modules explicitly targeting a SQL storage backend, like for instance Views, should rely on the Table Mapping API to build their queries. This API allows retrieval of information about where field data is stored and is thus helpful for building queries without hard-coding assumptions about a particular table layout. At least this is the theory; however, core currently does not fully support this use case, as some required changes have not been implemented yet (more on this below). Core SQL implementations currently rely on the specialized DefaultTableMapping class, which assumes one of the four table layouts described above.

A Real Life Example

We will now have a look at a simple module exemplifying a typical use case: we want to display a list of active users who have created at least one published node, along with the total number of nodes created by each user and the title of the most recent node. Basically, a simple tracker.

Displaying such data with a single query can be complex and will usually lead to very poor performance, unless the number of users on the site is quite small. A typical solution in these cases is to rely on denormalized data that is calculated and stored in a way that makes it easy to query efficiently. In our case we will add two fields to the User entity type to track the last node and the total number of nodes created by each user:

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

function active_users_entity_base_field_info(EntityTypeInterface $entity_type) {
  $fields = [];
  if ($entity_type->id() == 'user') {
    $fields['last_created_node'] = BaseFieldDefinition::create('entity_reference')
      ->setLabel('Last created node')
      ->setRevisionable(TRUE)
      ->setSetting('target_type', 'node')
      ->setSetting('handler', 'default');
    $fields['node_count'] = BaseFieldDefinition::create('integer')
      ->setLabel('Number of created nodes')
      ->setRevisionable(TRUE)
      ->setDefaultValue(0);
  }
  return $fields;
}

Note that the fields above are marked as revisionable, so that if the User entity type itself is marked as revisionable, our fields will also be revisioned. The revisionable flag is ignored on non-revisionable entity types.

After enabling the module, the status report will warn us that there are DB updates to be applied. Once they are complete, we will have two new columns in our user_field_data table ready to store our data. We will now create a new ActiveUsersManager service responsible for encapsulating all our business logic. Let's add an ActiveUsersManager::onNodeCreated() method that will be called from a hook_node_insert() implementation:

public function onNodeCreated(NodeInterface $node) {
  $user = $node->getOwner();
  $user->last_created_node = $node;
  $user->node_count = $this->getNodeCount($user);
  $user->save();
}

protected function getNodeCount(UserInterface $user) {
  $result = $this->nodeStorage->getAggregateQuery()
    ->aggregate('nid', 'COUNT')
    ->condition('uid', $user->id())
    ->execute();
  return $result[0]['nid_count'];
}

As you can see this will track exactly the data we need, using an aggregated entity query to compute the number of created nodes.

Since we need to also act on node deletion (hook_node_delete), we need to add a few more methods:

public function onNodeDeleted(NodeInterface $node) {
  $user = $node->getOwner();
  if ($user->last_created_node->target_id == $node->id()) {
    $user->last_created_node = $this->getLastCreatedNode($user);
  }
  $user->node_count = $this->getNodeCount($user);
  $user->save();
}

protected function getLastCreatedNode(UserInterface $user) {
  $result = $this->nodeStorage->getQuery()
    ->condition('uid', $user->id())
    ->sort('created', 'DESC')
    ->range(0, 1)
    ->execute();
  return reset($result);
}

In the case where the user's last created node is the one being deleted, we use a regular entity query to retrieve an updated identifier for the user's last created node.
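Likewise, the deletion side can be wired up with a thin hook_node_delete() wrapper (again a sketch, using the same hypothetical active_users.manager service id):

```php
use Drupal\node\NodeInterface;

/**
 * Implements hook_node_delete().
 */
function active_users_node_delete(NodeInterface $node) {
  \Drupal::service('active_users.manager')->onNodeDeleted($node);
}
```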

Nice, but we still need to display our list. To accomplish this we add one last method to our manager service to retrieve the list of active users:

public function getActiveUsers() {
  $ids = $this->userStorage->getQuery()
    ->condition('status', 1)
    ->condition('node_count', 0, '>')
    ->condition('last_created_node.entity.status', 1)
    ->sort('login', 'DESC')
    ->execute();
  return User::loadMultiple($ids);
}

As you can see, in the entity query above we effectively expressed a relationship between the User entity and the Node entity by imposing a condition with the entity syntax ('last_created_node.entity.status'), which the SQL entity query backend implements through a JOIN.

Finally we can invoke this method in a separate controller class responsible for building the list markup:

public function view() {
  $rows = [];
  foreach ($this->manager->getActiveUsers() as $user) {
    $rows[]['data'] = [
      String::checkPlain($user->label()),
      intval($user->node_count->value),
      String::checkPlain($user->last_created_node->entity->label()),
    ];
  }
  return [
    '#theme' => 'table',
    '#header' => [$this->t('User'), $this->t('Node count'), $this->t('Last created node')],
    '#rows' => $rows,
  ];
}

This approach is way more performant when numbers get big, as we are running a very fast query involving only a single JOIN on indexed columns. We could even skip the JOIN by adding more denormalized fields to our User entity, but I wanted to outline the power of the entity syntax. A possible further optimization would be collecting all the identifiers of the nodes whose titles are going to be displayed and preloading them in a single multiple-load operation preceding the loop.
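That preloading idea could be sketched like this at the top of the controller method (illustrative only, not code from the article; it reuses the manager service shown above):

```php
use Drupal\node\Entity\Node;

// Collect the identifiers of all referenced nodes first...
$users = $this->manager->getActiveUsers();
$nids = [];
foreach ($users as $user) {
  $nids[] = $user->last_created_node->target_id;
}
// ...then load them in a single multiple-load operation. Subsequent
// ->entity accesses in the loop should be served from the static cache.
Node::loadMultiple($nids);
```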

Aside from the performance considerations, you should note that this code is fully portable: as long as the alternative backend complies with the Entity Storage and Query APIs, the result you will get will be the same. Pretty neat, huh?

What's Left?

What I have shown above is working code that you can use right now in Drupal 8. However, there are still quite a few open issues to resolve before we can consider the Entity Storage API polished enough:

  • Switching between table layouts is supported by the API, but storage handlers for core entity types still assume the default table layouts, so they need to be adapted to rely on table mappings before we can actually change translatability or revisionability for their entity types. See https://www.drupal.org/node/2274017 and follow-ups.
  • In the example above we might have needed to add indexes to make our query more performant, for example, if we wanted to sort on the total number of nodes created. This is not supported yet, but of course «there's an issue for that!» See https://www.drupal.org/node/2258347.
  • There are cases when you need to provide an initial value for new fields when entity data already exists. Think for instance of the File entity module, which needs to add a bundle column to the core File entity. Work is also in progress on this: https://www.drupal.org/node/2346019.
  • Last but not least, most of the time we don't want our users to go and run updates after enabling a module; that's bad UX! A friendlier approach would be to apply updates automatically under the hood. Guess what? You can join us at https://www.drupal.org/node/2346013.

Your help is welcome :)

So What?

We have seen the recommended ways to store and retrieve entity field data in Drupal 8, along with (just a few of) the advantages of relying on field definitions to write simple, powerful and portable code. Now, Drupal people, go and have fun!

Annertech: Some recent fun we've had Mapping with Drupal

Fri, 03/27/2015 - 11:31

Categories:

Drupal

People love maps. People love being able to visually understand how locations relate to each other. And since the advent of Google Maps, people love to pan and zoom, to click and swipe. But what people hate is a shoddy mapping experience. Mapping can be hard, but fortunately Drupal takes a lot of the pain away.

Why map?

You might want a map if you:

Chen Hui Jing: 542 days as a Drupal developer

Fri, 03/27/2015 - 00:00

Categories:

Drupal

I’ve just listened to the latest episode of the Modules Unraveled podcast by Bryan Lewis, which talked about the current job market in Drupal. And it made me think about my own journey as a Drupal developer, from zero to reasonably competent (I hope). The thing about this industry is that everything seems to move faster and faster. There’s a new best tool or framework released every other day. Developers are creating cool things all the time. And I feel like I’m constantly playing catch-up. But looking back to Day 1, I realised that I have made quite a bit of progress since then.

Learning on the job

I’ve been gainfully employed as a Drupal architect...

Isovera Ideas & Insights: 6 Ways to Increase ROI on Your Drupal Project

Thu, 03/26/2015 - 22:28

Categories:

Drupal
Drupal is a great choice for your enterprise-level web application project, and maybe even your website. Like every other framework under the sun, it’s also a terrible choice if you’re cavalier about the management of its implementation. Things go awry, scope creeps, budgets get drained, and hellfire rains down from the sky… You get it. The good news is that there are steps you can take to prevent this from happening.

Palantir: Increasing Velocity and Drupal 8

Thu, 03/26/2015 - 22:12

Categories:

Drupal

Palantir CEO Tiffany Farriss recently keynoted MidCamp here in Chicago where she spoke about the economics of Drupal contribution. In it, she explored some of the challenges of open-source contribution, showing how similar projects like Linux have managed growth and releases, and what the Drupal Association might do to help push things along toward a Drupal 8 release. You can give her presentation a watch here.

With this post, we want to highlight one of the important takeaways from the keynote: the Drupal 8 Accelerate Fund.

Some of you are clients and use Drupal for your sites, others build those sites for clients around the world, and still others provide technology that enhances Drupal greatly. We also know that Drupal 8 is going to be a game changer for us and our clients for a lot of reasons.

While we use a number of tools and technologies to drive success for our clients, Drupal is in our DNA. In addition to being a premium supporting partner of the Drupal Association, we also count amongst our team members prominent Drupal core and contributed module maintainers, initiative leads, and Drupal Association Board, Advisory Board, and Working Group members.

We've all done our part, but despite years of support and contributions from countless companies and individuals, we need to take a new approach to incentivize contributors to get Drupal 8 done. That’s where the Drupal 8 Accelerate Fund comes in.

Palantir is one of seven anchor donors who are raising funds alongside the Drupal Association to support Drupal 8 development. These efforts relate directly to the Drupal Association's mission of uniting a global open source community to build and promote Drupal, and will support (and already have supported) contributors directly through additional dollars for grants.

The fund breaks down like this:

  • The Drupal Association has contributed $62,500
  • The Drupal Association Board has raised another $62,500 from Anchor Donors
  • Now, the Drupal Association’s goal is to raise contributions from the Drupal community. This is the chance for everyone from end users to independents to Drupal shops to show your support for Drupal 8. Every dollar donated by the community has already been matched, doubling your impact. That means the total pool could be as much as $250,000 with your help.

 
Drupal 8 Accelerate is first and foremost about getting Drupal 8 to release. However, it’s also a pilot program for the Drupal Association to obtain and provide financial support for the project. This is a recognition that, as a community, Drupal must find (and fund) a sustainable model for core development.

This is huge on a lot of levels, and those in the community have already seen the benefits with awards for sprints and other specific progress in D8. Now it’s our turn to rally. Give today and spread the word so we can all help move Drupal 8 a little closer to release.


Daniel Pocock: WebRTC: DruCall in Google Summer of Code 2015?

Thu, 03/26/2015 - 21:58

Categories:

Drupal

I've offered to help mentor a Google Summer of Code student to work on DruCall. Here is a link to the project details.

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015, here are some of the possibilities that are achievable in GSoC:

  • Updating it for Drupal 8
  • Support for logged-in users (currently it just makes anonymous calls, like a phone box)
  • Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call
Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

WebRTC at mini-DebConf Lyon in April

The next mini-DebConf takes place in Lyon, France on April 11 and 12. On the Saturday morning, there will be a brief WebRTC demo and there will be other opportunities to demo or test it and ask questions throughout the day. If you are interested in trying to get WebRTC into your web site, with or without Drupal, please see the RTC Quick Start guide.

Acquia: 2014 greatest hits - 30 Awesome Drupal 8 API Functions you Should Already Know - Fredric Mitchell

Thu, 03/26/2015 - 21:02

Categories:

Drupal
Language Undefined

Looking back on 2014, it was a great year of events and conversations with people in and around Acquia, open source, government, and business. I think I could happily repost at least 75% of the podcasts I published in 2014 as "greatest hits," but then we'd never get on to all the cool stuff I have been up to so far in 2015!

Nonetheless, here's one of my favorite recordings from 2014: a terrific session that will help you wrap your head around developing for Drupal 8, and a great conversation with Fredric Mitchell that covered the use of Drupal and open source in government, government decision-making versus corporate decision-making, designing Drupal 7 sites with Drupal 8 in mind, designing sites for end users, where the maximum business value comes from in your organization, and more!

Chris Hall on Drupal 8: D8 theming first impression

Thu, 03/26/2015 - 20:11

Categories:

Drupal
Introduction

After upgrading this site to a nice shiny Beta, I was itching to try theming on Drupal 8. I had left off until now because a few simple experiments showed me that even a simple sub-theme broke quickly under the pace of Drupal change; now, though, I should be able to upgrade any efforts and improvements without too much difficulty.

I theme Drupal every now and again but spend more time doing back-end and server-related work, so I usually need a good understanding of the mechanics of theming even when not actively doing it.

Often in the past I have been at odds with the theming philosophy of teams I am working with (and have had to capitulate when outnumbered ;)), as I would rather strip out most of the guff that Drupal inserts and break away from the 'rails' that make many Drupal sites turn out kind of samey (apparently the 33% camp).

Also, when working with talented front-end developers who don't necessarily deal mostly with Drupal, it seems such a shame to clip their wings; I would rather start with a theme like Mothership.

The challenge

The assumption I had was that Drupal 8 would be much easier to customise and "go your own way" with than Drupal has ever been before. The mini-challenge I set myself was to re-implement the look from another site, chris-david-hall.info, which runs on ExpressionEngine, and to use the same CSS stylesheet verbatim (in the end I changed one line).

The theme is pretty basic, based on Bootstrap 3, but even so it has a few elements of structure that are not very Drupally, which made for an interesting experiment.

More than enough for my first attempt.

The result

Well, this site no longer looks like a purple Bartik, and it does bear more than a passing resemblance to the site I ripped the CSS from.

It was pretty easy to restructure things, and Twig theming in Drupal is a massive improvement. I am now convinced that Drupal 8 wins hands down over Drupal 7 for themeability.

There is still a lot more stuff I could strip out; this was a first pass, and I am going to take a breather and come back to it. I have a couple of stylesheets left from Drupal to keep the in-line editing and admin stuff (mostly) working. I would prefer to target those bits more selectively.

The theme is on GitHub, just for interest and comparison at the moment, but depending on later experiments it might turn into something more generically useful.

Still a few glitches

It is a bit difficult working out whether I have done something wrong or whether I am encountering bugs in the Beta; I will check whether issues have been raised when I get the chance. There are problems: for example, for an anonymous user the home link is always active, and some blocks seem to leave a trace even when turned off for a page (which messes with detecting whether a sidebar is active, for example). Both of these problems also show up in Bartik, though.

I plucked the theme from my site at chris-david-hall.info and it needs a lot of work anyway; I am hoping to improve both sites in tandem now.


Wunderkraut blog: How to combine two facet items in Facet API

Thu, 03/26/2015 - 19:31

Categories:

Drupal

How to change the Solr search query for one facet using these modules: Search API, Search API Solr, Facet API, Facet API Bonus, and some custom code.

I configured these modules to create a search page showing the content you search for and a list of facet items that you can filter the search result on. In my case the facet items were a representation of node types that you could filter the search result with. There are tons of blog posts on how to do that; see the Search API Solr documentation.

The facet item list can look like this (list of node types to filter search result on):

- Foo (22)
- Bar (18)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

What I wanted to achieve was to combine two facet items into one, so the list would look like this:

- Foo and Bar (40)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

The solution was the Search API hook hook_search_api_solr_query_alter(). I needed to change the query only for the facet item (node type) "Foo" and also include (node type) "Bar" in the search query. So I fetched the facet item name by digging deep into the argument $query.

<?php
function YOUR_CUSTOM_MODULE_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {

  // Fetch the facet item name we want to alter the Solr query for.
  $facet_item = $query->getFilter()->getFilters();
  if (!empty($facet_item)) {
    $facet_item = $facet_item[0]->getFilters();

    if (!empty($facet_item[0])) {
      if (!empty($facet_item[0][1])) {
        $facet_item = $facet_item[0][1];
        // This is the facet item we want to alter the Solr query for ("foo"); also add node type "bar" to the filter.
        if ($facet_item === 'foo') {
          $call_args['params']['fq'][0] = $call_args['params']['fq'][0] . ' OR ss_type:"bar"';
        }
      }
    }
  }
}
?>

We have now altered the Solr query, but the list looks the same; the only difference is that if you click on the "Foo" facet you will get both "Foo" and "Bar" (node type) nodes in the search result.

To change the facet item list, I used the Drupal hook hook_facet_items_alter() provided by the contrib module Facet API Bonus:

<?php
function YOUR_CUSTOM_MODULE_facet_items_alter(&$build, &$settings) {

  if ($settings->facet == "type") {

    // Save this count to add to the combined facet item.
    $number_of_bar = isset($build['bar']['#count']) ? $build['bar']['#count'] : 0;

    foreach($build as $key => $item) {
      switch ($key) {
        case 'foo':
          // Change the title of facet item to represent two facet items
          // (Foo & Bar).
          $build['foo']["#markup"] = t('Foo and Bar');

          // Smash the count of hits together.
          if ($build['foo']['#count'] > 0) {
            $build['foo']['#count'] = $build['foo']['#count'] + $number_of_bar;
          }
          break;

        // Remove this facet item, since the Foo item now includes this node type in the search result.
        case 'bar':
          unset($build['bar']);
          break;
      }
    }
  }
}
?>

After this, the list should look like we want:

- Foo and Bar (40)
- Elit (10)
- Ipsum (9)
- Ultricies (5)
- Mattis (2)
- Quam (1)

I also had text printed out by the Facet API submodule Current Search. This module lets you add blocks with text and tokens. In my case I added text shown when you searched and filtered, to inform the user what they had just searched for and/or filtered on. This can be done by adding existing tokens on the Current Search configuration page ("admin/config/search/current_search"). The problem for me was that the token provided contained the facet item that Facet API created, not the one I changed. So I needed to change the token text "Foo" to "Foo and Bar". This can be accomplished with hook_tokens_alter().

<?php
function YOUR_CUSTOM_MODULE_tokens_alter(array &$replacements, array $context) {

  if (isset($replacements['[facetapi_active:active-value]'])) {
    switch ($replacements['[facetapi_active:active-value]']) {

      case 'Foo':
        $replacements['[facetapi_active:active-value]'] = 'Foo and Bar';
        break;
    }
  }
}
?>

And that's it.
Link to all code