Another DrupalCamp - 2 Day Event at Stanford Law School!
Session by Mark Casias, Senior Drupal Developer.
Backing Yourself into an Accessible Corner
Saturday, March 11
3PM
Room 280B
Learn more about Stanford Drupal Camp HERE.
Selected sessions for Drupalcon Baltimore have just been announced!
From security and Drupal 8 site building to digital transformation and international SEO, check out what the Mediacurrent team has in store for this year’s Drupalcon:
Speakers: Jay Callicott and Dan Polant
Session Track: Site Building
Building a Multilingual site can be intimidating. In Drupal 8, the tools for internationalization are better than ever but there’s still much to do to get up and running. To localize a Drupal 8 site you will need to know what modules to enable and how to configure them.
Not a coder? Not a problem - no coding required. In this session, we will walk you through, step by step, how to configure a multilingual site, referencing case study examples from VisitTheUSA.com and Habitat.org. We’ll show you how to configure content types and fields. We’ll show you how to translate text strings. We’ll show you how translation providers use connector modules to integrate with Drupal. And much, much more! Read more about Jay and Dan's Multilingual in Drupal 8 session.
Speaker: Mark Shropshire
Session Track: Site Building
Guardr maintainers have worked with the security departments of corporations, U.S. banks, and the U.S. Federal Government, combining security standards to not only pick out some great hardening modules but also to configure them during install with hardened settings. Why download and configure individual modules when Guardr can do the heavy lifting for you?
In this session, attendees will learn about Guardr's philosophy, features, and how to start new projects with Guardr. Let's raise the bar on Drupal security with a more streamlined approach. Read more about Mark's security session.
Speakers: Carie Fisher and Chris Doherty
Session Track: UX/ Content Strategy
Even the best-laid project plans can have gaps when the project shifts from ideation/design to front-end development. In this session, Chris and Carie will walk through the workflow of transforming an 'idea on paper' into a living styleguide, and explain how having a styleguide benefits everyone involved in the process, from designer to client, from the beginning to the end of a project. Read more about Chris and Carie's session on how to master the handoff from design to development.
Speakers: Jen Slemp and Calvin Scharffs, Lingotek
Session Track: UX/Content Strategy
Even though search engines are becoming more sophisticated every day, International SEO is still incredibly complicated and getting search engines to serve language and country specific content can feel a little like witchcraft. Lucky for you, Drupal 8 is naturally search engine friendly and with a little planning, we’ll have you getting rankings in no time. Read more about Jen's SEO session.
Speaker: Shellie Hutchens
Session Track: Business
Omnichannel. IOT. Big Data. Personalization. Digital Transformation. If written today, PT Barnum’s famous phrase might have read, “There’s a buzzword born every minute.” But what do these phrases and trends really mean to your organization? Join us for a game of Buzzword Bingo where we plan to give you our take on these and other popular, oft-used phrases consuming the Marketing, Digital, and Business industries today. Read more about what you'll learn about Digital Transformation and Drupal from Shellie's session.
Speaker: Josh Linard
Session Track: Business
In order to continue to grow your career with Drupal, it’s important to understand where the digital, web design and development marketplace is going - and where Drupal fits into it. Who better to explore in this endeavor than those organizations who are trying to solve the most complex digital, IT and marketing challenges on a global scale: The Large Enterprise. This session is less about Gartner rankings and CMS comparisons, and more about you gaining an understanding how Drupal now fits into a global organization’s product roadmap in 2017 and beyond. After all, it’s not just an ECMS anymore! Read more about the state of Drupal in the Enterprise in Josh's session abstract.
Speakers: Ryan Gibson and Josh Linard
Session Track: Drupal Showcase
Rinnai is a worldwide leader in the manufacturing of water heaters and is the number one manufacturer of tankless water heaters in the US. Through the website, Rinnai customers connect directly with an extensive network of dealers using a suite of online tools to facilitate the process from initial product search to dealer follow-up.
Mediacurrent was tasked with developing the US and Canada partner and customer-facing sites in Drupal 8 to boost Rinnai's dealer network with more qualified customer leads, guide consumers through the process of deciding on the right product, empower the marketing team to manage content, and provide an overall more modern, inviting, and user-friendly experience for consumers and partners. In this session, we’ll show you the steps we took to accomplish these goals. Read more about Ryan and Josh's Drupal 8 session.
Additional Resources
Drupalcon NOLA: The People Behind the Usernames | Blog
Security Recap from Drupalcon New Orleans | Blog
The Real Value of Drupalcon | Blog
In the development community, diversity has become a buzzword; an initiative to be boldly undertaken. Here at Mediacurrent our team is about 25% women. We’re developers and project managers, digital strategists and designers, account supervisors and marketing professionals. Still, at some of the largest companies in the country, the number of women is dreadfully low. At Google only 17% of the computer professionals are women. At Facebook and Twitter, it’s less than 15%. There is no question that we are underrepresented in our industry, even as we make up more than half of the user base for our own products.
A study published in the journal Science this January found that “girls as young as 6 are less likely to consider women ‘really, really smart,’ childhood’s version of brilliance.” It’s also the age at which “girls begin to avoid activities said to be for children who are ‘really, really smart.’” Which, they believe, goes a long way to explaining why so few women go into STEM fields later in life.
Technology is considered a boys club, too technical for women, too hard. But that’s backward. Women were the first coders, the first creators of the algorithms and techniques we rely on every day as developers and engineers.
Ada Lovelace, in 1842, wrote the first algorithm intended to be carried out by a machine. This simple set of instructions, provided by holes in punchcards to Charles Babbage’s analytical engine, was the first computer ‘program.’ And women have been creating and revolutionizing the way we code ever since.
A little more than 100 years after Ada’s famous notes were published, Jean Jennings Bartik and her team of 6 women programmed the ENIAC, the first all-electronic, digital computer. The programming contained the first instances of subroutines, nesting and error control.
On its introduction to the public in 1946, ENIAC generated headlines across the country, but the men who built the machine were given all the credit. Bartik and her team were never mentioned.
Grace Hopper, “Grandma COBOL”, led the team that created the first compiler. The A-0 system was the first to translate symbolic code into machine language. Not long after, the first widely used compiler, A-2, was released, paving the way for the development of programming languages. Among these was Hopper’s most famous contribution, COBOL. Hopper originated the idea that programs could be written in English. She saw that the “words” were simply another kind of symbol that the compiler could recognize and convert into machine code, setting the stage for all the high-level programming languages we use today.
It was a woman that took Americans to the moon, too. Margaret Hamilton, Director of the Software Engineering Division of the MIT Instrumentation Laboratory, was the lead developer of in-flight software for the Apollo and Skylab programs in the late 1960s. Her addition of error recognition and display to the Apollo lander software allowed decisions to be made in real time, saving the landing from an unnecessary mission abort and has been called "the foundation for ultra-reliable software design." She founded Hamilton Technologies in the late 1980s, based on her paradigm ‘Development Before The Fact,’ a system that led to many of the systems design and software development techniques still used today.
More recently, but still 150 years after Ada’s first program, Barbara Liskov became one of the first women in the US to receive a Ph.D. in computer science. In 2004, she became the second, and so far the last, woman awarded the John von Neumann Medal for "fundamental contributions to programming languages, programming methodology, and distributed systems." And in 2009 she was awarded the Turing Award for her work in the design of programming languages and software methodology that led to the development of object-oriented programming.
Women have a rich history in computer programming that often goes ignored. They slip, unknown and uncelebrated, into the background of great technological innovation. As Google VP Megan Smith puts it, “there is a lack of exposure to the work that technical women do.”
But ours is an industry built to change, based on agility and upheaval. A more diverse industry can only lead to better code and more innovative and effective products. Leslie Miley told the podcast Reply All: “If you don’t have people of diverse backgrounds building your product, it will become very, very narrowly focused.”
Ellen Ullman, a writer and herself a coder, believes that the best way to get more women involved in coding is to teach them what it means to think like a programmer. She told The New Republic: “They will understand that programs are written by people with particular values … and, since programs are human products, the values inherent in code can be changed.”
Groups like Girls Who Code, Girl Develop It, and Code Like a Girl are doing just that. Their mission is to provide a place for young girls to learn to code, with the express goal of closing the gender gap in the tech industry. And they have been successful, partnering with companies like Amazon, Adobe, Facebook, Salesforce and Yahoo to employ their graduates.
And this year’s DrupalCon is “consciously working to increase the diversity of the speakers who take to the stage at DrupalCon to ensure that they better represent the Drupal community.” Almost ⅓ of the speakers at the Baltimore conference identify as diverse.
Women created the technologies that made this industry a force for changing the world, and we belong here. While it is still true that less than a quarter of all IT jobs are held by women, the industry is changing. The jobs may still be held mostly by young white men, but the technology innovation itself is not, and has never been, a boys club.
Additional Resources
Setting Diversity as a DrupalCon Goal | DrupalCon Baltimore
Mediacurrent to Present Seven Sessions | Blog
Empowering Women in Technology | Wunder Blog Post
Modal dialogs provide a great experience for the end user: they allow for quick display of content in an overlay without navigating to another page. Getting forms to load and render properly in a modal can sometimes be a little tricky, but fortunately, it’s relatively straightforward to implement in Drupal 8.
We will be setting up a custom form containing a button that opens up another form in a modal using Drupal’s FormBuilder and AJAX API. So, let’s get started!
In this example, we will create a custom module to contain all the files we need. The custom module directory structure should look like this:
modules
└── custom
    └── modal_form_example
        ├── modal_form_example.info.yml
        ├── modal_form_example.module
        ├── modal_form_example.routing.yml
        └── src
            ├── Controller
            │   └── ModalFormExampleController.php
            └── Form
                ├── ExampleForm.php
                └── ModalForm.php
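The *.info.yml file itself can be minimal. As a rough sketch (the name and description here are placeholders, not from the original post):

```yaml
name: Modal Form Example
type: module
description: 'Example of opening a form in a modal dialog.'
core: 8.x
package: Custom
```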
ModalFormExampleController.php
This controller contains a callback method openModalForm() which uses the form_builder service to build the form and uses the OpenModalDialogCommand to show it in the modal.
<?php

namespace Drupal\modal_form_example\Controller;

use Symfony\Component\DependencyInjection\ContainerInterface;
use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Ajax\OpenModalDialogCommand;
use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Form\FormBuilder;

/**
 * ModalFormExampleController class.
 */
class ModalFormExampleController extends ControllerBase {

  /**
   * The form builder.
   *
   * @var \Drupal\Core\Form\FormBuilder
   */
  protected $formBuilder;

  /**
   * The ModalFormExampleController constructor.
   *
   * @param \Drupal\Core\Form\FormBuilder $formBuilder
   *   The form builder.
   */
  public function __construct(FormBuilder $formBuilder) {
    $this->formBuilder = $formBuilder;
  }

  /**
   * {@inheritdoc}
   *
   * @param \Symfony\Component\DependencyInjection\ContainerInterface $container
   *   The Drupal service container.
   *
   * @return static
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('form_builder')
    );
  }

  /**
   * Callback for opening the modal form.
   */
  public function openModalForm() {
    $response = new AjaxResponse();

    // Get the modal form using the form builder.
    $modal_form = $this->formBuilder->getForm('Drupal\modal_form_example\Form\ModalForm');

    // Add an AJAX command to open a modal dialog with the form as the content.
    $response->addCommand(new OpenModalDialogCommand('My Modal Form', $modal_form, ['width' => '800']));

    return $response;
  }

}
Notice that OpenModalDialogCommand takes three arguments: the modal title, the content to display (our modal form), and any dialog options. (In this example, we specified a width of 800 for the modal dialog.)
modal_form_example.routing.yml
We set up two routes:
modal_form_example.form:
  path: '/admin/config/example_form'
  defaults:
    _form: 'Drupal\modal_form_example\Form\ExampleForm'
    _title: 'Example Form'
  requirements:
    _permission: 'administer site configuration'

modal_form_example.open_modal_form:
  path: '/admin/config/modal_form'
  defaults:
    _title: 'Modal Form'
    _controller: '\Drupal\modal_form_example\Controller\ModalFormExampleController::openModalForm'
  requirements:
    _permission: 'administer site configuration'
  options:
    _admin_route: TRUE
Now, it’s time to create our forms!
ExampleForm.php

<?php

namespace Drupal\modal_form_example\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Url;

/**
 * ExampleForm class.
 */
class ExampleForm extends FormBase {

  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state, $options = NULL) {
    $form['open_modal'] = [
      '#type' => 'link',
      '#title' => $this->t('Open Modal'),
      '#url' => Url::fromRoute('modal_form_example.open_modal_form'),
      '#attributes' => [
        'class' => [
          'use-ajax',
          'button',
        ],
      ],
    ];

    // Attach the library for pop-up dialogs/modals.
    $form['#attached']['library'][] = 'core/drupal.dialog.ajax';

    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {}

  /**
   * {@inheritdoc}
   */
  public function getFormId() {
    return 'modal_form_example_form';
  }

  /**
   * Gets the configuration names that will be editable.
   *
   * @return array
   *   An array of configuration object names that are editable if called in
   *   conjunction with the trait's config() method.
   */
  protected function getEditableConfigNames() {
    return ['config.modal_form_example_form'];
  }

}
A form element called open_modal is created, which will be the button that triggers our modal form. This element is created as a link type so we can specify a #url value, a \Drupal\Core\Url object pointing to the custom route created earlier that opens the modal form (modal_form_example.open_modal_form).
One important line to include is:
$form['#attached']['library'][] = 'core/drupal.dialog.ajax';
This line “attaches” the core/drupal.dialog.ajax library to our form and is necessary to render the modal dialogs. Alternatively, you can declare it as a dependency of one of your own libraries in your module’s *.libraries.yml file.
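Another way to pull in core's dialog library is to declare it as a dependency of one of your own libraries in a *.libraries.yml file; a sketch, where the library name "modal-form" is a hypothetical example:

```yaml
# Hypothetical modal_form_example.libraries.yml: attaching the
# "modal-form" library anywhere also loads core's dialog library.
modal-form:
  dependencies:
    - core/drupal.dialog.ajax
```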
ModalForm.php
And, finally, the star of this example - the modal form itself!
For simplicity, the modal form will contain only a “required” checkbox and an AJAX “submit” button. On successful submission, a success message will display.
<?php

namespace Drupal\modal_form_example\Form;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Ajax\AjaxResponse;
use Drupal\Core\Ajax\OpenModalDialogCommand;
use Drupal\Core\Ajax\ReplaceCommand;

/**
 * ModalForm class.
 */
class ModalForm extends FormBase {

  /**
   * {@inheritdoc}
   */
  public function getFormId() {
    return 'modal_form_example_modal_form';
  }

  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state, $options = NULL) {
    $form['#prefix'] = '<div id="modal_example_form">';
    $form['#suffix'] = '</div>';

    // The status messages that will contain any form errors.
    $form['status_messages'] = [
      '#type' => 'status_messages',
      '#weight' => -10,
    ];

    // A required checkbox field.
    $form['our_checkbox'] = [
      '#type' => 'checkbox',
      '#title' => $this->t('I Agree: modal forms are awesome!'),
      '#required' => TRUE,
    ];

    $form['actions'] = ['#type' => 'actions'];
    $form['actions']['send'] = [
      '#type' => 'submit',
      '#value' => $this->t('Submit modal form'),
      '#attributes' => [
        'class' => [
          'use-ajax',
        ],
      ],
      '#ajax' => [
        'callback' => [$this, 'submitModalFormAjax'],
        'event' => 'click',
      ],
    ];

    $form['#attached']['library'][] = 'core/drupal.dialog.ajax';

    return $form;
  }

  /**
   * AJAX callback handler that displays any errors or a success message.
   */
  public function submitModalFormAjax(array $form, FormStateInterface $form_state) {
    $response = new AjaxResponse();

    // If there are any form errors, re-display the form.
    if ($form_state->hasAnyErrors()) {
      $response->addCommand(new ReplaceCommand('#modal_example_form', $form));
    }
    else {
      $response->addCommand(new OpenModalDialogCommand("Success!", 'The modal form has been submitted.', ['width' => 800]));
    }

    return $response;
  }

  /**
   * {@inheritdoc}
   */
  public function validateForm(array &$form, FormStateInterface $form_state) {}

  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {}

  /**
   * Gets the configuration names that will be editable.
   *
   * @return array
   *   An array of configuration object names that are editable if called in
   *   conjunction with the trait's config() method.
   */
  protected function getEditableConfigNames() {
    return ['config.modal_form_example_modal_form'];
  }

}
We added an element of type status_messages to display any form validation errors in the modal dialog. Without this, the error messages set by drupal_set_message() will display on the page itself on the next page load.
The AJAX submit callback submitModalFormAjax checks for any errors before displaying the success message.
And that’s really it! Let’s open that modal window:
If there are any errors (i.e., if you submit the form without checking the required checkbox), the form will be updated to show the errors:
Otherwise, a success message will show:
Additional Resources
Flag Migrations in Drupal 8 | Blog
5 Command Line Tips and Tricks | Video
Migrating Content References in Drupal 8 | Blog
In the world of content management systems, a major anxiety for editorial staff is whether their site is going to allow them to easily build complex pages. With today's demand for editorial workflows and internationalization, this problem gets even more complex. Add Drupal's Lego-like architecture to the mix and there can be a huge array of options for a site builder or architect to consider. At the recent New England Drupal Camp, aka NEDCamp, I presented an overview of four competing options.
Drupal's internal content architecture is based upon a system called "entities" to which "fields" are added. In layperson's terms, the entities are analogous to tables in a database, and the fields are columns in those tables. In practice this is an imperfect analogy, but it's at least a starting point to help understand what this article is talking about.
For the new pages we're going to focus on the following fairly common set of requirements:
Many of our clients have requirements like the above; they look to add photos of varying sizes, videos, and lots of other types of data to their pages.
For any site looking to add functionality defined above, several aspects should be taken into consideration:
Bonus points may be awarded for the following:
Please note that I did not get into the front-end aspects of the architectures or user permissions, this article is focused on the architectural aspects of the functionality.
Drupal 8 includes several pieces that provide some portion of the requirements, but in general it falls very short.
The closest approximation to our requirements would entail creating multiple block types, one for each page component that was needed.
Once the page component structures are ready they need to be added to a content type.
To then create a custom page the following steps would need to be taken.
Should changes be needed, it would be necessary to use the "Custom block library" to find and then edit the required blocks. Note that once changes are made to a block they would be instantly visible to visitors, even if the Content Moderation functionality of Drupal 8.2 was being used to create a draft.
The data of the sub components, i.e. the blocks created above, is stored in regular field data stored in the block's tables, and those blocks are referenced by ID directly in the fields on the node's tables. This results in data structures that are very straightforward and very easy to query using Views or custom query logic.
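As a sketch of what that custom query logic could look like, an entity query against the custom blocks might run along these lines (the "banner" bundle name is a hypothetical example, not from the article, and this requires a bootstrapped Drupal 8 site):

```php
<?php

// Hypothetical sketch: fetch the IDs of all custom blocks of an
// assumed "banner" bundle, then load the full block entities.
$block_ids = \Drupal::entityQuery('block_content')
  ->condition('type', 'banner')
  ->execute();

$blocks = \Drupal::entityTypeManager()
  ->getStorage('block_content')
  ->loadMultiple($block_ids);
```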
Because this option uses core data structures there is (at least some) existing support for Migrate.
As can be guessed from the above, there are a number of problems with this plan.
The Paragraphs solution requires two modules - Paragraphs itself, and its sole dependency Entity Reference Revisions. Built to provide a better solution for adding groups of fields, and different groups of fields for e.g. different portions of a page, the module has become a very popular choice for building complex pages.
The process for creating the required editorial system for Paragraphs is similar to that of Option #1. Where Option #1 used the blocks system and created custom "Block types", with Paragraphs it is necessary to create custom "Paragraph types". This system uses the exact same fields available to blocks, content types, etc., so it's "the same but different."
Similar to Block types, Paragraph type definitions are reusable, so they can be used with multiple content types (or other situations where they're needed). For example, a "Text with floating image" paragraph type might be used on a "Blog post" content type and a "Product" content type, while e.g. a "Product" content type might have a "Carousel" Paragraph type that's used to demo different poses of the product.
The general steps are:
Once the data structures are created, creating content follows a normal enough editorial processes.
Editing existing content follows the same process as always, just go to the node's "edit" form and edit away.
The data structure is almost identical to the core Entity Reference + Block Types solution (option #1), with the added note that the Entity Reference Revisions field that controls the system points to the individual *revision ID* of the referenced paragraph item, not its primary ID.
While there is no support for the Migrate system yet, at the time of writing that is actively being worked on and should be available soon.
There are some limitations or problems with the Paragraphs solution, though they're honestly smaller problems than some of the other options:
Some have said they do not like the editorial UX, but given it uses the core field system's UX it is at least a familiar process. Some discussions have started up about presenting an alternative UI, to make it more drag-n-droppy, and the maintainers of Field Collection have bounced around the idea of turning that module into an alternative UI. Personally, I think a grid-style UI could be rather good for some types of data, but we'll see what becomes of the initiatives.
The scenarios covered above all focused on directly connecting one type of entity to another using a type of field that refers one entity to the other. With Entity Embed the connection is indirect - the connection from the parent node to the child page components is managed using code embedded inside the WYSIWYG editor; it's reminiscent of the 1980s, when we had to store special printer control codes in a word processing document to format the output, only this time we have a GUI.
Because this architecture uses the text formats rather than custom fields, it will be usable site-wide on any text field on any entity. This means that once the module is configured properly it can be used on any text field on the site with no additional setup, so taxonomy term descriptions, blog posts, user profile biography fields, etc, etc can take advantage of it.
Thanks to Drupal 8 having a WYSIWYG editor in core, this module only has one dependency - the Embed module.
The Entity Embed system builds upon the existing WYSIWYG functionality in Drupal core, and then allows different types of things to be inserted through that mechanism. Along with core's Editor and Filter modules, Entity Embed also requires the Embed module.
Setting up this system will, as with others, require creating either content types or block types for each type of page component that is needed, so again the fields system gives lots of flexibility for building the requirements as needed; see Option #1 for details on building out the page components.
Once the page component structures are built, further steps are needed to make them available within the WYSIWYG editor.
These steps will need to be repeated for each page component that needs to be added to the page.
One useful aspect of this option is that, as mentioned, everything is handled through the WYSIWYG editor on the relevant node-edit form. As a result, it becomes somewhat easier for editorial staff to manage their content.
To create a new landing page, create a new node as normal. The node will have the usual body field along with a WYSIWYG editor. What will be new to the editorial team are the new WYSIWYG editor toolbar buttons, which will allow the team to insert the page components as needed. Clicking on one of the buttons will open a popup allowing content to be linked. Besides that, the editorial experience is the same as with core's normal processes.
The Entity Embed system works off the normal input filters and uses custom control strings embedded via the WYSIWYG to do its work. This means that the data is stored in regular text fields, but it also means those strings have to be parsed via regular expression in order to extract the information. This makes working with these data structures cumbersome, and there's little chance of ever seeing Views support.
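For illustration, the control string Entity Embed stores in the text field looks roughly like the following tag (the UUID is a placeholder, and the exact attributes depend on the embed button's configuration):

```html
<drupal-entity
  data-entity-type="block_content"
  data-entity-uuid="00000000-0000-0000-0000-000000000000"
  data-entity-embed-display="entity_reference:entity_reference_entity_view">
</drupal-entity>
```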
There is no Migrate support for this yet, and I would be surprised if anything could be made general enough to contribute, given how custom the data structures would be.
There are several reasons why this may be a reasonable way of handling the requirements.
There are some definite problems with this structure.
A staple for many Drupal 7 sites I built over the past number of years, Panelizer provides a different interface for embedding different page components, leveraging the Panels system instead of core's standard entity templating system. When paired with the In-Place Editor (IPE) from Panels it allows site editorial teams to drag ‘n drop page components around the page with little effort. For Drupal 8 the user interface and setup has changed a bit from Drupal 7, though once set up it appears to work fairly well. That said, there are some gotchas that will be covered shortly.
Drupal allows different displays of the same content for different situations, e.g. a teaser is displayed differently to a full page display. These display mechanisms are called "view modes". Panelizer goes further by allowing each view mode to be customized individually so that the teaser can be displayed one way while the full display (aka "Full content") is different.
The Panelizer module requires Panels, Page Manager and CTools modules, so they need to be installed too. Block types will need to be created for each page component, as with Option #1, and a new content type will also need to be added.
Note: there are two versions of Panelizer and Panels for use with Drupal 8. If using Drupal 8.2 then the 8.x-3.x versions of both Panels and Panelizer should be used, and the separate Layout Plugin module must also be installed. If using Drupal 8.3 then use the 8.x-4.x versions of Panels and Panelizer, and Layout Plugin is not needed as it was merged into core.
In order to allow individual nodes to be managed by Panelizer, a little bit of setup is needed (correct as of 8.x-4.0-beta1).
The default display will list each field and several meta elements (author, created timestamp, etc) in a single column, so it may be worthwhile to adjust the default display. Thankfully the default can be changed by following a few steps:
Day-to-day usage of Panelizer may take a little getting used to.
Panelizer's per-entity display customizations are stored in a field, so that level of the data structure is at least partly manageable. The data in the field's record is stored in an old fashioned serialized array, which contains identifiers for the types of items being positioned in the various layout regions, their per-instance settings, etc. In short, it's rather messy to work with and, as could be guessed, there isn't any Views support yet.
There is no Migrate support yet, though I believe the data structure is ironed out enough to where it could be done.
There are several problems with this solution.
There's definitely overlap between the IPE interface available on an individual Panelizer-enabled node and the Quick Edit and Outside-In systems in core; work is in progress to streamline this workflow and functionality for Drupal 8.4.
| | Option #1: Core | Option #2: Paragraphs | Option #3: Entity Embed | Option #4: Panelizer |
|---|---|---|---|---|
| Stability | Stable | Stable | Beta/RC | Beta |
| Edit process | Fields | Fields | WYSIWYG | Separate UI |
| Add images | Yes | Yes | Yes | Yes |
| Dynamically add pieces | Yes | Yes | Yes | Yes |
| Dynamically add different pieces | Yes | Yes | Yes | Yes |
| Combine blocks with content | Kinda | Kinda | Yes | Yes |
| Combine Views with content | Hard | Hard | Hard | Yes |
| Editorial workflow | No | Yes | Not really | WIP |
| Internationalization | Partial | Partial | Partial | Partial |
| Extensible | Fields | Entities | Entities | Blocks, etc |
| Views-compatible data | Yes | Yes | No | Unlikely |
| Migrate integration | Yes | WIP | Custom | Not yet |
The not-so-obvious answer to the question on everyone's mind, which option is best, is that it depends upon the project's requirements. Some questions to help narrow down the choices would be:
My personal recommendation is:
A common failing with reference-based systems is an inability to support revisions of the parent entity. For example, when editing an "article" node, anything that is in the node should go through the same editorial workflow as the node itself. For many years the "nested" data structures available to Drupal - Panelizer, Entity Reference, Fieldable Panels Panes, Field Collection, etc. - did not support revisions. Some of us tried to add revision tracking to Entity Reference, but the discussion ultimately led to a refrain of "this module will follow Drupal 8 core", only to see people dismiss the idea of adding the functionality to core because it hadn't been proven in the contrib world yet - a nasty catch-22.
(As an aside - yes, Field Collection did add support for revisions in 2012, but it had other limitations that made it an unsuitable solution for many sites, like still being in beta after six years.)
The problem with not supporting revisions stems from sites wanting to have an editorial workflow around their content. Many sites want to have their content changes be reviewed by others – whether it's new content or changes to existing content – before it gets published to the world. Some sites will also want to publish items at a certain time, e.g. when an official announcement is made at a conference there might need to be changes scheduled to become visible for visitors. Other sites might have legal requirements that content be vetted by a lawyer before it is published and to then be able to track the text changes that the lawyer requests. If the items added to the page do not have their revision tied to the parent node's revision, these changes would be immediately visible to all visitors. For some businesses, this might be inconvenient, but for others it could lead to major legal ramifications.
With the advent of Paragraphs and Entity Reference Revisions, and to a degree some recent improvements to Panelizer and Fieldable Panels Panes (and forthcoming improvements in Panelizer on D8), there is finally consideration for the revision state of the node that these objects are attached to. Changes to a node can now be made with the full understanding that everything editable from the node's edit form will go through the same editorial review process, and that there won't be unexpected surprises when a portion of the page is published before the rest. This is how site editors have expected, and site builders have hoped, their sites would work for years; it is only now that the Drupal community has the stable tools to support this simple expectation.
The above does not cover every available option. As is always the case with Drupal, there were many other options that could have been considered but were excluded from the list for various reasons.
A module with a long history, Field Group provides a way of grouping multiple fields on an entity's edit form. Many sites use it to group fields into fieldsets or even new vertical tab groups, and it is so ubiquitous that hopefully it will be added to core at some point. However, it is more of a cosmetic UX improvement than an architectural step beyond what core already provides, so it doesn't cover any of the use cases we needed.
This module started life as a spin-off of the "multigroup" functionality of the ill-fated CCK v3 for Drupal 6 and provides a way of repeating a grouping, aka "collection", of fields. The older Drupal architecture left much to be desired for cleanly implementing this system, but Drupal 7's entity system finally provided a clean base to build from. As a result, the Drupal 7 release of Field Collection became a reference field with a custom entity type and was the de facto standard for several years for building such data structures. Over time, as people realized they wanted editorial control over their content, the necessary revisioning support was added; a similar need led to support for translations using the Entity Translation module. All told, at this point it has a very similar functionality set to Paragraphs.
On the Drupal 8 front, in my opinion it suffered from a lack of momentum: in 2016 it was on its third port attempt with no stable release yet available, as compared to Paragraphs, which hit its 1.0 release in July of that year. Seeing this situation unfold, and knowing that duplicate efforts can waste the community's limited time, I suggested that the site-building world would be a lot simpler if Field Collection were deprecated and efforts were instead put into improving Paragraphs. Thankfully, after some discussion, the maintainers saw the benefit and have started collaborating!
A late addition to the race, the Stacks module seems like it may have some interesting potential. Currently, only a development snapshot is available for Drupal 8, with no details yet on the maintainers' plans to get it to a stable release. However, based on what has been disclosed so far via some screencasts listed on the project page and promotional information on the maintainers' website, it looks like there are lots of interesting ideas wrapped up in the module. So far it seems like a combination of Entity Reference, Inline Entity Form, a custom entity type, the Template Picker module, and some UI customizations for good measure. For people new to it, the Template Picker module provides a method for having different displays available for each type of component based upon the available Twig files.
The key problem with the Stacks module is that, like Entity Reference in core (see option #1 above), it does not support tracking content changes by its revision so changes are published immediately. Given this is a key requirement for many sites, I don't recommend its use and instead would suggest people put their efforts into Paragraphs or one of the other solutions.
I just discovered one more "nested entities" system that had slipped past my radar - Bricks. This system has actually been around for a few years and is currently at v4.5 for Drupal 7. Like Option #1, it lets you nest other entities inside a parent entity, and like Stacks it provides an alternative UI for managing the nested entities, with some definitely interesting ideas. However, also like Stacks, it fails to properly support revisions, so right now I cannot recommend it.
The new UX paradigm for Drupal core, "outside in", presents an interesting shift in Drupal's editorial workflow. Because this was only added in Drupal 8.2, which was only released a few months ago, the contrib world hasn't caught up yet to supporting or taking advantage of the new facilities. What I'm personally hoping for is that some of the data structure tools (Paragraphs, etc) will start to support the new processes and ultimately build a bridge so editorial teams can take advantage of the usability improvements.
Drupal 8 core provides solid REST capabilities out of the box, which is great for integrating with a web service or allowing a third-party application to consume content. However, the REST output provided by Drupal core comes in a fixed structure that may not match what the consuming application expects.
Enter normalizers, which let us alter the REST response to our liking. For this example, we will be looking at altering the JSON response for node entities.
First, let’s install and enable the latest stable version of the “REST UI” module:
composer require drupal/restui
drush en restui -y
Go to the REST UI page (/admin/config/services/rest) and enable the “Content” resource:
You should see the resource enabled:
Enable “GET”, check “json” as the format, and “cookie” as the authentication provider:
Enable Drupal core’s “rest” module. This will also enable the “Serialization” module as it is a dependency:
Create a test node and fill in one or all of the fields. We will be requesting and altering the structure of the core node REST resource for this node output.
After you’ve created your node, append ?_format=json to the end of your node’s URL (so it looks something like /node/1?_format=json) and access that page. You should see a JSON dump with the field names and the values for the node entity similar to:
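The exact structure depends on your content type’s fields, but a truncated, hand-made sample for a node with a title and body might look like the following (the field names come from core; the values are purely illustrative):

```json
{
  "nid": [{"value": 1}],
  "title": [{"value": "Test node"}],
  "status": [{"value": true}],
  "changed": [{"value": 1488369600}],
  "body": [{"value": "<p>Hello world</p>", "format": "basic_html", "summary": ""}]
}
```

Note how every field value is wrapped in an array with a nested “value” key.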
This is great, but what if we wanted this output to be structured differently? We can normalize this output!
Create a custom module that will contain our custom normalizers. The module structure should look like:
custom/
└── example_normalizer
    ├── example_normalizer.info.yml
    ├── example_normalizer.module
    ├── example_normalizer.services.yml
    └── src
        └── Normalizer
            ├── ArticleNodeEntityNormalizer.php
            ├── CustomTypedDataNormalizer.php
            └── NodeEntityNormalizer.php
Each normalizer must extend NormalizerBase and implement NormalizerInterface. At a minimum, the normalizer must define:
protected $supportedInterfaceOrClass - the interface or class that the normalizer supports.
public function normalize($object, $format = NULL, array $context = array()) {} - performs the actual “normalizing” of an object into a set of arrays/scalars.
Let’s write a normalizer to remove those nested field “value” keys:
CustomTypedDataNormalizer.php
<?php

namespace Drupal\example_normalizer\Normalizer;

use Drupal\serialization\Normalizer\NormalizerBase;

/**
 * Converts typed data objects to arrays.
 */
class CustomTypedDataNormalizer extends NormalizerBase {

  /**
   * The interface or class that this Normalizer supports.
   *
   * @var string
   */
  protected $supportedInterfaceOrClass = 'Drupal\Core\TypedData\TypedDataInterface';

  /**
   * {@inheritdoc}
   */
  public function normalize($object, $format = NULL, array $context = array()) {
    $value = $object->getValue();
    if (isset($value[0]) && isset($value[0]['value'])) {
      $value = $value[0]['value'];
    }
    return $value;
  }

}
We set our $supportedInterfaceOrClass protected property to Drupal\Core\TypedData\TypedDataInterface so we can make some low-level modifications to the values for the entity. This means that this normalizer supports any object that is an instance of Drupal\Core\TypedData\TypedDataInterface. In the normalize() method, we check if the value contains the [0]['value'] elements, and if so, return the plain value stored there. This effectively removes the “value” keys from the output.
example_normalizer.services.yml
We need to allow Drupal to detect this normalizer, so we put it in our *services.yml and tag the service with “normalizer”:
services:
example_normalizer.typed_data:
class: Drupal\example_normalizer\Normalizer\CustomTypedDataNormalizer
tags:
- { name: normalizer, priority: 2 }
One important thing to notice here is the “priority” value. By default, the “serialization” module provides a normalizer for typed data:
serializer.normalizer.typed_data:
class: Drupal\serialization\Normalizer\TypedDataNormalizer
tags:
- { name: normalizer }
In order to have our custom normalizer get picked up first, we need to set a priority higher than the one that already exists that supports the same interface/class. When the serializer requests the normalize operation, it will process each normalizer sequentially until it finds one that applies. This is the “Chain-of-Responsibility” (COR) pattern used by Drupal 8 where each service processes the objects it supports and the rest are passed to the next processing service in the chain.
Make sure to clear the cache so the new normalizer service is detected.
If we go to our JSON output for the node again, we can see that the output has changed a bit. We no longer see the nested “values” keys being displayed (and looks much cleaner, as well):
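As a hand-made illustration (field names from core, values invented), a node with a title and body would now come back flattened, something like:

```json
{
  "nid": 1,
  "title": "Test node",
  "status": true,
  "changed": 1488369600,
  "body": "<p>Hello world</p>"
}
```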
Great! Now, what if we want to add some custom values to our output? Let’s say we want the link to the node and an ISO 8601-formatted “changed” timestamp.
We can create a normalizer that will make these modifications. Add an entry in the *services.yml file:
example_normalizer.node_entity:
class: Drupal\example_normalizer\Normalizer\NodeEntityNormalizer
arguments: ['@entity.manager']
tags:
- { name: normalizer, priority: 8 }
And our normalizer:
NodeEntityNormalizer.php
<?php

namespace Drupal\example_normalizer\Normalizer;

use Drupal\serialization\Normalizer\ContentEntityNormalizer;
use Drupal\Core\Datetime\DrupalDateTime;

/**
 * Converts the Drupal entity object structures to a normalized array.
 */
class NodeEntityNormalizer extends ContentEntityNormalizer {

  /**
   * The interface or class that this Normalizer supports.
   *
   * @var string
   */
  protected $supportedInterfaceOrClass = 'Drupal\node\NodeInterface';

  /**
   * {@inheritdoc}
   */
  public function normalize($entity, $format = NULL, array $context = array()) {
    $attributes = parent::normalize($entity, $format, $context);

    // Convert the 'changed' timestamp to ISO 8601 format.
    $changed_timestamp = $entity->getChangedTime();
    $changed_date = DrupalDateTime::createFromTimestamp($changed_timestamp);
    $attributes['changed_iso8601'] = $changed_date->format('c');

    // The link to the node entity.
    $attributes['link'] = $entity->toUrl()->toString();

    // Re-sort the array after our new additions.
    ksort($attributes);

    // Return the $attributes with our new values.
    return $attributes;
  }

}
Similar to before, we have to define our supported interface or class. For this normalizer, we only want to support node entities, so we set it to Drupal\node\NodeInterface (which means any object that implements Drupal\node\NodeInterface).
Our custom node normalizer extends the Drupal\serialization\Normalizer\ContentEntityNormalizer class that is provided by the “serialization” module. The only thing we want to do is append two new values, “link” and “changed_iso8601”, to the output that is already provided.
To format our timestamp, we get the timestamp from the entity object, create a DrupalDateTime object, and then format it using the PHP date format “c” character for ISO 8601. This value will be assigned to our new key on the $attributes array that holds all the values.
We will create the “link” value by using the toUrl() and toString() methods to get the URL which gets assigned to the “link” key of the $attributes array.
After clearing cache and visiting the node output again, we do indeed see our new additions:
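With the caveat that this is an invented illustration rather than real site output, the normalized node would now include the two extra keys, with everything sorted alphabetically by ksort():

```json
{
  "body": "<p>Hello world</p>",
  "changed": 1488369600,
  "changed_iso8601": "2017-03-01T12:00:00+00:00",
  "link": "/node/1",
  "nid": 1,
  "status": true,
  "title": "Test node"
}
```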
So, we were able to alter the output for all nodes, but there may be cases where we would only want to alter the JSON output of specific node types. Fortunately, there isn’t much more to do than what we already learned. We will create a normalizer that contains a custom “changed” timestamp format that will only apply to “article” nodes.
Add another entry to our *services.yml file:
example_normalizer.article_node_entity:
class: Drupal\example_normalizer\Normalizer\ArticleNodeEntityNormalizer
arguments: ['@entity.manager']
tags:
- { name: normalizer, priority: 9 }
ArticleNodeEntityNormalizer.php
<?php

namespace Drupal\example_normalizer\Normalizer;

use Drupal\serialization\Normalizer\ContentEntityNormalizer;
use Drupal\Core\Datetime\DrupalDateTime;
use Drupal\node\NodeInterface;

/**
 * Converts the Drupal entity object structures to a normalized array.
 */
class ArticleNodeEntityNormalizer extends ContentEntityNormalizer {

  /**
   * The interface or class that this Normalizer supports.
   *
   * @var string
   */
  protected $supportedInterfaceOrClass = 'Drupal\node\NodeInterface';

  /**
   * {@inheritdoc}
   */
  public function supportsNormalization($data, $format = NULL) {
    // If we aren't dealing with an object or the format is not supported,
    // return now.
    if (!is_object($data) || !$this->checkFormat($format)) {
      return FALSE;
    }
    // This custom normalizer should be supported for "Article" nodes.
    if ($data instanceof NodeInterface && $data->getType() == 'article') {
      return TRUE;
    }
    // Otherwise, this normalizer does not support the $data object.
    return FALSE;
  }

  /**
   * {@inheritdoc}
   */
  public function normalize($entity, $format = NULL, array $context = array()) {
    $attributes = parent::normalize($entity, $format, $context);

    // Convert the 'changed' timestamp to ISO 8601 format.
    $changed_timestamp = $entity->getChangedTime();
    $changed_date = DrupalDateTime::createFromTimestamp($changed_timestamp);
    $attributes['article_changed_format'] = $changed_date->format('m/d/Y');

    // Re-sort the array after our new addition.
    ksort($attributes);

    // Return the $attributes with our new value.
    return $attributes;
  }

}
In this normalizer, you will notice that we’re defining a new method: public function supportsNormalization($data, $format = NULL) {}. It allows for more granular checks on objects that are instances of the class/interface defined in $supportedInterfaceOrClass. In our supportsNormalization(), we check if the type of the node is “article” using the getType() method. If so, it returns TRUE, indicating that this normalizer supports the node. Otherwise, it returns FALSE, and following the chain-of-responsibility pattern, the serializer processes the next normalizer in sequence until it finds one that applies to the object.
Let’s take a look at the JSON output of a test “article” node. We do indeed see our custom attribute “article_changed_format” from our custom “article” normalizer:
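Again as an invented illustration, an “article” node’s output would carry the extra m/d/Y-formatted key (sorted alphabetically, so it appears first). Note that because the article normalizer calls ContentEntityNormalizer’s normalize() directly, the “link” and “changed_iso8601” additions from our other node normalizer are not present here:

```json
{
  "article_changed_format": "03/01/2017",
  "body": "<p>Hello world</p>",
  "changed": 1488369600,
  "nid": 1,
  "status": true,
  "title": "Test article"
}
```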
When we look at the REST response of another content type (like a “page”), we do not see this custom attribute because it is not an “article” and the normalizer did not apply to it. It does, however, pick up the next normalizer in sequence, which happens to be the custom node normalizer we created earlier:
In determining how to implement a normalizer for your complex data, the normalizers in the “serialization” module serve as a great guide in learning how normalizers work and how different types of data are normalized.
The Serialization API documentation is also a great reference for how normalizers work in the serialization process.
Additional Resources
8 Useful Snippets for the Drupal 8 Rest Module | Mediacurrent Blog
Creating and Updating Comments with Drupal's REST Services and Javascript | Mediacurrent Blog
Loading and Rendering Modal Forms in Drupal 8 | Mediacurrent Blog
JSON Web Tokens (JWT) are an open industry standard (RFC 7519) for representing a set of information securely between two parties. JWTs are commonly used for authentication to routes, services, and resources. They are digitally signed, which enables secure transmission of information that can be verified and trusted. Seen as a more modern approach to authentication, JWTs serve as a robust alternative to traditional authentication models, eliminating the need to pass sessions or credentials repeatedly to the server.
In this post, I outline the benefits of JWTs and their advantages over session-based authentication. I also share a step-by-step guide to setting up JWT authentication in Drupal 8 for authenticating requests to protected core REST resources.
A JWT contains three parts: a header, a payload, and a signature.
Example of a JWT passed in an “Authorization” HTTP header:
Authorization: Bearer <token>
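A JWT is simply those three parts, each base64url-encoded, joined by dots, so you can inspect one with standard shell tools. As a minimal sketch (the token below is hand-made for illustration, not one issued by the JWT module, and its signature segment is a meaningless placeholder):

```shell
# Illustrative token: header {"alg":"HS512","typ":"JWT"}, payload {"uid":1},
# and a fake third segment standing in for the signature.
TOKEN='eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJ1aWQiOjF9.c2lnbmF0dXJl'

# Decode the second (payload) segment of the token.
echo "$TOKEN" | cut -d '.' -f 2 | base64 -d
# prints {"uid":1}
```

Decoding is not the same as verifying: anyone can read the payload, but only a holder of the secret key can produce a valid signature for it.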
We will go through the process of setting up JWT in Drupal 8 to authenticate some core REST resources.
The JWT module provides an authentication provider, using JWTs, that we can enable for our REST endpoints. It has a dependency on the Key module, which should also be enabled. The firebase/php-jwt library is included for encoding and decoding the JWTs.
The module can easily be installed with composer:
composer require drupal/jwt
Enable the module:
drush en jwt -y
We also want to enable the “JWT Auth Consumer” and “JWT Auth Issuer” modules that come with the JWT module (these modules will allow Drupal to issue and consume JWTs):
drush en jwt_auth_consumer jwt_auth_issuer -y
To generate and validate JWTs, a secret key is needed. Go to /admin/config/system/keys/add (or go to the “Configuration” page and click “Keys”). On the form:
Example:
Save the key.
Now, we need to tell the JWT module to use the key we just created as the secret. Go to /admin/config/system/jwt (or go to the “Configuration” page and click “JWT Authentication”). Under “Algorithm”, select the algorithm that was set for your secret key; in our case, it will be “HMAC using SHA-512 (HS512)”. Under “Secret”, choose the secret key you created. Example:
Save the configuration.
In this example, we will be applying JWT authentication to a core REST resource, so we need to make sure the REST module and our resource are enabled:
Install the REST UI module:
composer require drupal/restui
Enable the modules:
drush en rest -y
drush en restui -y
Go to /admin/config/services/rest and enable the “Content” resource:
Click “Edit” on the “Content” resource.
Enable the “GET” method, check the “json” option under “Accepted request formats”, and “jwt_auth” under “Authentication providers”:
Save the configuration. As of Drupal 8.2.0, accessing entities via REST no longer requires REST-specific permissions. As a result, whether or not a user has access to an entity via REST depends on their permissions to access that entity. For the sake of our example, let’s prevent anonymous users from accessing any published content, which also restricts them from accessing those entities via REST:
We will create and publish a test “article” node whose JSON output we will be accessing via REST. You can create any node you’d like. Once created, the JSON output for that node can be viewed by accessing /node/1?_format=json.
Now that our REST resource has been configured to use JWT Auth and anonymous users are restricted from accessing entities via REST, it’s time to test out the JWT authentication.
Open your favorite REST client like Postman and enter the URL for the node’s JSON output (i.e., /node/1?_format=json). Using the GET method, hit “Send”. The response will be empty because we are trying to access this resource as an anonymous user (and we prevented anonymous users from accessing published nodes for this example):
Since our REST resource is protected with JWT Authentication, we need to pass in a JWT to authenticate us. Logged in as an “administrator” user, visit /jwt/token to retrieve a JWT. You should see the JWT displayed when you visit that route:
Copy this entire token and add an “Authorization” header in your REST client with a value of “Bearer <token>” where <token> is the JWT:
Hit “Send” again and voilà - a response is returned:
Let’s discuss what’s happened here. The JWT module provides three events -- VALIDATE, VALID, and GENERATE -- which are dispatched by the event dispatcher. Custom validations and logic can be incorporated by creating your own event subscribers that subscribe to these events (more information about the events can be found in JwtAuthEvents.php).
The VALID event fires after the token has been validated (in other words, the JWT signature is verified as legitimate). The user is then loaded from the “uid” that is specified in the JWT “payload” (see loadUser() in JwtAuthConsumerSubscriber.php). Note that if no “uid” is in the payload, the user is “anonymous”. The JWT Authentication provider (JwtAuth.php) has an authenticate() method that validates this user and if the validation succeeds, that user will be able to access the REST resource. Because the JWT passed into the request’s header contains the “uid” of the “administrator” user -- and provided that administrators are able to access this resource -- the response is returned successfully.
We can also use the JWT Debugger to analyze any JWT tokens. If we paste our JWT in the debugger and put in our secret key, we can see a confirmation that our signature has been verified. We can also see that the JWT “payload” contains the administrator user’s uid and the expiration timestamp:
There is a good deal of publicly-accessible content comparing enterprise-level Content Management Systems (CMS) in terms of features, functionality, and cost. Each CMS comes with its own strengths and weaknesses in light of an organization’s requirements, and it behooves the organization to read up on these comparisons and consult with digital agencies like Mediacurrent before deciding on which CMS to use.
One oft-cited annual report is Gartner’s magic quadrant report comparing CMSs at the Enterprise Level. At this writing, the most recent report places Acquia/Drupal, Adobe Experience Manager (AEM), and Sitecore as the three leaders in the field, based on both their completeness of vision and their ability to execute on organizational requirements. In an effort to go into more detail on those three CMS’s, this two-part blog post compares Drupal and AEM, with a future blog post comparing Drupal and Sitecore.
How two CMSs compare depends largely on the perspective of the type of stakeholder. Stakeholders can include content authors, marketers, developers, decision-makers, and more. Part 1 of this blog post focuses on three perspectives: the Content Author’s perspective, the Marketer’s perspective, and the Business perspective. Part 2 will focus on the IT and Community perspectives.
An obvious caveat: as a long-time Drupalist and Mediacurrent employee, I’m a biased observer. However, I endeavor to be objective in this blog post -- indeed, I admire Adobe and what they’ve created with AEM -- and I highlight AEM’s advantages as they present themselves.
AEM’s strongest suit is its user experience for content authors. For those familiar with more personal-level site builders like Squarespace or Wix, AEM’s authoring experience is similar, yet more configurable. AEM features a highly flexible, drag-and-drop user interface for many content authoring tasks, and a tight integration with many of Adobe’s other technologies. Let’s take the authoring of an article as an example. A content author can begin creating an article using one of many pre-existing templates. An article’s content can be created or edited using what Adobe calls “content fragments”. These are small pieces of content (text, images, or other media) that can be dragged into an article, and reused across articles. They are also inline editable. Other non-text items can be dragged in too, including images, video, and even interactive Javascript widgets. Here’s a drag-and-drop example from AEM’s online documentation:
This is impressive functionality. On the Drupal side, a combination of Drupal modules can match this functionality. Drag and drop functionality can be achieved either with the Panels module, or by using Acquia’s Lift service and accompanying Lift Connector module.
Note also that in the AEM example, content is broken up into small pieces, rather than one monolithic body. This allows for more granular control and reuse of content. In Drupal, monolithic content can be broken up with the Paragraphs module, a favorite of Mediacurrent’s. This module enables end users to choose on-the-fly between predefined Paragraph Types independent from one another, where a Paragraph Type is any unit of content (e.g. a text block, image, slideshow, etc.).
Drupal’s Entity Construction Kit module provides alternative means of editing with more granularity and reuse.
AEM also allows the author to edit content in place while in view mode, without going into a full edit mode.
Drupal’s parallel is the Quick Edit module, which allows content to be edited in place, like so:
Zooming out, AEM displays lists of content much like Drupal’s Views module does, allowing options of viewing by list, thumbnails, and more. These displays are generated in a very different way, however. AEM’s parallel to Drupal’s Views is called Collections, which work somewhat like organizing content with folders. To add items to a Collection, the AEM user drags items into it, like so:
When the AEM Collection is built, it can be displayed as a list, or as thumbnails, and more. Here’s a thumbnails example:
AEM Collections have a filtering mechanism that filters by file type, file size, last update, and more:
Drupal’s Views module takes a different approach. Views retrieves its data through a database query. It has a robust filtering mechanism geared toward more structured content and can output lists, thumbnails, etc. like AEM. It offers greater power and flexibility than the AEM mechanism, at the cost of a steeper learning curve. Views is one of the most heralded and popular features of Drupal, so much so that it is now included in core for Drupal 8.
AEM also provides well-arranged visual sitemap functionality, more visually appealing than that of Drupal’s text-based Sitemap module.
AEM sitemap (from youtube.com/watch?v=M0skmH5HEJo)
However, Drupal’s Sitemap module can employ a CSS library, like that of Slickmap, to present a more graphical interface in Drupal.
Another strength of AEM is its layout manager, which allows for in-page layout editing. It allows grid layout changes on the fly, without leaving the page. The user can dynamically drag layout borders and grids to size. Here again, the user experience is similar to that of the more modern personal blogging platforms like Wix.
For managing layouts beyond custom coding, Drupal’s popular Panels module allows the user to pick and choose layouts.
AEM integrates with Adobe Marketing Cloud (AMC), leveraging AMC’s powerful capabilities in a seamless manner. To discuss all of AMC’s marketing capabilities is outside the scope of this blog post, but at a high level, AMC creates highly personalized user experiences based both on profile data the user enters and by observing and adapting to user behavior. These personalized experiences are delivered across and optimized for multiple channels. AEM can integrate with AMC to capture and perform analytics on usage statistics.
This seamless and powerful integration comes at a cost, however, that of the additional licensing fee for AMC. Integrating with another best-of-breed marketing solution like Marketo or Pardot is not a good option for AEM, since the integration will not be as seamless and there isn’t a significant gain in functionality over AMC. In contrast, Drupal, as an open source platform, integrates well with every major marketing automation platform, empowering marketing teams to use the tools that work best for their business.
The story with e-commerce capabilities is much the same. Combining a CMS with an e-commerce engine can produce compelling branded and personalized shopping experiences. With the Adobe solutions, product information is easy to import into AEM from Adobe’s commerce engine, and once there, it can be managed like any other piece of content. With Drupal, there are options on how to add e-commerce functionality. One option is to use Drupal’s integration with Magento, a popular, best-of-breed open source solution that has backing and partnership from Acquia. Another option is to use the Drupal Commerce suite of modules (again, open source), which offer comparable e-commerce functionality of its own. Both the Drupal solutions and Adobe’s e-commerce engine can provide the following functionality and more:
It will be covered more in Part 2 of this blog post, but it’s worth briefly mentioning here that customization matters to many marketers. As an open platform, Drupal is built to be customized, as evidenced by its thousands of contributed modules, close to 3,000 for Drupal 8 alone. With a proprietary system, the source code is locked, inhibiting the ability to extensively customize.
In addition to evaluating how well a CMS’s features meet functional and nonfunctional requirements, decision-makers need to evaluate the return on investment when making a CMS decision. Drupal and its contributed modules are free, whereas AEM has a substantial licensing cost. The average AEM deal is estimated in the mid six figures in licensing, with a total implementation cost of over $1M USD. Adding other Adobe products such as Adobe Marketing Cloud incurs an additional licensing cost. Organizations considering Adobe will need to calculate when they can expect a return on their licensing investment. Adobe’s website cites some large organizations that have decided to make that investment, including RCS MediaGroup, Franke Group, and Skylark.
Drupal is not without its costs either, in that both Drupal and AEM require costs for implementation and hosting. A key difference, however, is that Drupal has a vast array of hosting options at virtually every price point. If a client chooses to use Acquia’s Lift, there is a subscription fee for that (contact Acquia for pricing). Lift provides a great user experience for content authors comparable to that of AEM, for example the drag-and-drop interface cited above. Lift’s features are well worth the licensing fee for many organizations and are very competitive compared to AEM licensing costs.
Another long-held concern among decision-makers is that of the support and responsiveness of the people behind the software. In the early days of open source CMS’s, decision-makers were more willing to invest in proprietary solutions because, should something go wrong, they had access to the software development team on the other end of the phone. Over the years at Mediacurrent, we have seen that this concern does not hold for Drupal. Part of the reason is that, at the lower levels of the technology stack, there are a number of excellent Drupal hosting services that have a proven track record in hosting and servicing highly-scalable, highly-available Drupal solutions. At the higher levels of the stack, Drupal core and its commonly-used contributed modules are fully unit-tested, and any new security vulnerabilities are rapidly responded to by a dedicated security team. This combination of strengths has led to many Drupal success stories for a diverse array of enterprise clients such as The Weather Company, Travelport, MagMutual, and many more.
Stay tuned for Part 2, which will cover the IT perspective and the community perspective on the two CMSs. Part 2 will also provide concluding thoughts on which situations fit these technologies the best.
Additional Resources
10 Reasons Why Marketers are Moving to Drupal | Blog
Using Marketing Automation for Personalization: Benefits and Challenges | Blog
20 Things You Must Know Before Approaching a Web Agency | Blog
DrupalCon Baltimore here we come! We are Gold sponsors once again this year and there are a ton of ways to connect with our team. Here are some of the highlights:
This year will be my first time attending DrupalCon (as I am new to the Mediacurrent team & Drupal)! I hope to attend as many sessions as I can, including the 7 sessions my teammates will be leading. Our sessions this year include:
Not a coder? Not a problem - no coding required. In this session, Director of Development, Jay Callicott & Senior Drupal Developer Dan Polant will walk you through step by step how to configure a multilingual site, referencing case study examples from VisitTheUSA.com and Habitat.org.
In this session, Security Lead Mark Shropshire will present on Guardr's philosophy, features, and how to start new projects with Guardr. Let's raise the bar on Drupal security with a more streamlined approach.
Senior Front End Developer Chris Doherty & Accessibility Lead Carie Fisher will walk through the workflow of transforming an 'idea on paper' into a living styleguide, and how having a styleguide benefits everyone involved in the process, from designer to client, from the beginning to the end of a project. Read more about Chris and Carie's session on how to master the handoff from design to development.
International SEO is still incredibly complicated and getting search engines to serve language and country specific content can feel a little like witchcraft. Lucky for you, Drupal 8 is naturally search engine friendly and with a little planning, Senior Digital Strategist Jen Slemp & VP of Marketing for Lingotek Calvin Scharff will have you getting rankings in no time. Read more about Jen's & Calvin’s SEO session.
Omnichannel. IoT. Big Data. Personalization. Digital Transformation. If written today, PT Barnum’s famous phrase might have read, “There’s a buzzword born every minute.” But what do these phrases and trends really mean to your organization? Join Director of Client Services Shellie Hutchens for a game of Buzzword Bingo, where she will give you our take on these and other popular, oft-used phrases consuming the marketing, digital, and business industries today. Read more about what you'll learn about Digital Transformation and Drupal from Shellie's session.
In order to continue growing your career with Drupal, it’s important to understand where the digital, web design, and development marketplace is going - and where Drupal fits into it. Who better to learn from than the organizations trying to solve the most complex digital, IT, and marketing challenges on a global scale: the large enterprise. This session is less about Gartner rankings and CMS comparisons, and more about gaining an understanding of how Drupal fits into a global organization’s product roadmap in 2017 and beyond. After all, it’s not just an ECMS anymore!
The Weather Company (an IBM business), in partnership with Mediacurrent, has an incredible track record of not just keeping up with innovation, but driving it. When weather.com originally launched on Drupal back in 2014, we set a new standard for how highly trafficked a site hosted on Drupal could be. We built a progressively decoupled site well before progressive decoupling became a hot topic in the Drupal world. But we didn’t stop there.
This session will explore the history and ongoing efforts at driving innovation, as well as the business value of investing in the exploration of emerging technologies.
The best way to connect with the Mediacurrent team is to visit us at our booth, #224. We have a great spot on the exhibition floor and plenty of room, so we are amping up the fun this year. Pick our brains, pick up a t-shirt & a sticker, or grab your after-party invitation.
We are bringing more from our team this year than any time before! We are all looking forward to trading stories, sharing ideas, and learning something new along the way. We’ll be taking turns at the booth, but if you want to speak with someone specific or talk shop someplace quieter than the exhibition floor, just schedule a one-on-one conversation.
Finally, Mediacurrent and Lingotek are teaming up for the fourth year in a row to host an After-Party on Tuesday, 7pm-11pm at Pratt Street Ale House. There will be drinks, food, and photobooth fun! Stop by either booth for an invitation and remember to bring your Drupalcon badge for entry.
Stay tuned to our YouTube channel for a Friday 5 episode on 5 ways to connect with our team at DrupalCon.
Additional Resources
Mediacurrent to Present Seven Sessions at DrupalCon Baltimore | Blog
5 Ways to Connect at DrupalCon Baltimore | Blog
What exactly does it mean to be a 'front-end developer'? From company to company? From website to website? Really, from day to day? Does your colleague actually do the same job as you? For example, some days I’m working in InDesign or Photoshop all day, the next I’m writing jQuery or building theme components. The very next day I am writing a blog post, prepping for a presentation, or doing research on the latest trends.
We wear a lot of hats as front-end developers. Depending on the client or company you work for, you may be the designer, UX/UI specialist, site-builder, QA tester, and developer all rolled into one. How can we possibly add the accessibility hat on top of all that...especially when a project does not have a lot of time or budget to include that piece?
“Web accessibility means that people with disabilities can use the Web. More specifically, web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the web and that they can contribute to the web.” - Web Accessibility Initiative (WAI)
Basically, it’s making websites more inclusive to all people.
Everyone! Over 57 million Americans (~20% of the US population) identify as having some type of disability. These range from severe to temporary disabilities, and that figure does not even count the part of the population that does not identify as having a disability, or the populations that could benefit from accessible sites, such as people with English as a second language and aging populations. The number of people who actually need accessible websites is far greater than the population who identifies as disabled.
It is the right thing to do - it is important that your website is ‘perceivable’, ‘operable’, ‘understandable’, and ‘robust’ (POUR) - the four principles behind a user experience that provides optimal usability for everyone.
It is a smart business move - the combined population of California and New York is around 57 million, the same number as those who identify as being disabled. Would you ever create a website that excluded those two large states? Does that make any sense? Optimizing your website for accessibility opens your site to a wider audience (a potential 20%+ increase in users), and it is also good for SEO, search bots, Google ranking, and more.
It is the law - government-funded programs, schools, airlines, and nonprofits are required to follow accessibility guidelines, while private companies and organizations just hope they do not get sued. Earlier this year, the U.S. Access Board published its Information and Communication Technology (ICT) refresh, a ruling that updates the old Section 508 rules to be more in line with the current WCAG 2.0 guidelines. This ruling covers the public sector, but it sets up the possibility of regulations in the private sector as well.
Now that we have defined what being a front-end developer is (variable) and what accessibility is (sort of nebulous, but we know it is important), what is inclusive development? Is that just a fancy word for “developing with accessibility in mind”? Yes and no…
The term Inclusive Design is not a new one; it has been around since 2005. It is defined as: “The design of mainstream products and/or services that are accessible to, and usable by, as many people as reasonably possible ... without the need for special adaptation or specialized design.”
Inclusive development is really taking that next step and adhering to inclusive design practices. It is a way of rethinking development so that it adds value to all users, not just accessibility for some users. It is a shift in how you approach development. Basically, if you target your website at the 25% of your users who have severe difficulties, it will also cover all the additional users with little to no difficulty. Adaptive technologies (AT) help cover severely disabled users with specialized products. Inclusive development means making something valuable, not just accessible, to as many people as we can. It is about putting “Accessibility First.”
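As a small, hypothetical sketch of that mindset (the form action and copy here are placeholders, not from any real project), consider how far plain, native HTML gets you before any special adaptation is needed:

```html
<!-- One set of markup that serves mouse, keyboard, and screen reader users alike. -->
<form action="/subscribe" method="post">
  <!-- A visible label programmatically tied to its input; unlike a
       placeholder attribute, it does not disappear on focus. -->
  <label for="email">Email address</label>
  <input type="email" id="email" name="email" required
         aria-describedby="email-help">
  <p id="email-help">We send one newsletter per month.</p>

  <!-- A native button is focusable and keyboard-operable by default,
       unlike a styled div with a click handler. -->
  <button type="submit">Subscribe</button>
</form>
```

Starting from semantic elements like these and layering design on top is usually far cheaper than retrofitting accessibility onto custom widgets later.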
At this point, you might be thinking…design terms, accessible philosophies, actually thinking about my code…I did not sign up for this! Well, I would argue that the way you might feel about accessibility today is much like how we felt when we heard about the “Mobile First” approach way back in 2009. If you remember, when the mobile first approach appeared (where we design/develop for smaller screens first then add more features and content for larger screens), it was a crazy and overwhelming shift in thinking and front-end workflows. Now it is just part of our daily lives as front-end developers. I am not even sure I could make a site that wasn't responsive at this point.
Now it is 2017 and “Accessibility First” may seem just as daunting and impossible...there is so much to know, so many different ideas of what accessibility means, new rules, new tools, etc. But if you have the right tools and attitude, there is hope!
To recap, we are front-end developers and we care about accessibility, maybe we even think about accessibility first…now what? How does component driven development play a role in all of this?
I bet a lot of people have used component driven development in their front-end workflow by now. Even if you have not formally heard the term or used the tools, I am betting you are already doing it to some degree…it is about breaking a large site down into manageable pieces. Much like building a house (out of Legos or real materials), you build one piece at a time…first the foundation, then the skeleton structure, walls, windows, roof, and everything in between. Component driven development tools allow us to do this, but for websites.
Component driven development breaks the site down into manageable, reusable components, which means less development time. It allows front-end and back-end developers to work simultaneously. And clients love it because they can preview the build process and can use the living style guide as a reference after the site has launched.
One of the first questions I asked at the beginning of this article was: how do we add accessibility to a project that does not have a lot of time or budget to include that piece? Well, one way we can tackle this is by using an accessible, component driven approach. By thinking about inclusiveness from the start, we can get a head start on accessibility while still building the required site components.
The A11Y style guide was formed out of front-end development workflows, aided by the component driven development tool KSS node, and fueled by the conviction that everyone deserves to be able to use and contribute back to the wonderfully wacky web.
The A11Y style guide is ultimately a style guide that comes with pre-populated accessible components that include helpful links to related tools, articles, and WCAG guidelines to make your site more inclusive. You can use it as a reference, as a base for your own style guide or accessible components. You may even decide to create a new accessible theme based on the guides.
The concept of the A11Y style guide is really simple. I did not reinvent the wheel with this project. This tool builds on all the wonderful work that is already being done in the world of accessibility and makes that existing knowledge base more applicable to real-world scenarios in a condensed manner. I lovingly think of it as the “CliffsNotes” of accessibility or a nicer term may be “accessible pattern library” for front-end developers.
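For readers who have not used kss-node: it builds a living style guide from structured comments in your stylesheets. A hypothetical entry for an accessible button component might look like the following (the class name, colors, and markup are illustrative, not taken from the A11Y style guide itself):

```css
/*
Buttons

A native <button> element styled for sufficient color contrast and a
clearly visible focus state, so it is keyboard- and screen-reader-friendly
without any extra JavaScript.

Markup:
<button class="btn">Save changes</button>

Styleguide Components.Button
*/
.btn {
  background: #005a9c; /* dark blue behind white text passes WCAG AA contrast */
  color: #fff;
  padding: 0.5em 1em;
}

.btn:focus {
  outline: 3px solid #ffbf47; /* visible focus indicator for keyboard users */
}
```

kss-node parses each comment block like this into a style guide section, rendering the `Markup:` snippet as a live, reviewable preview of the component.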
But by using a reusable, accessible, component driven approach and thinking about inclusiveness from the start, we can get a head start on building accessible websites. So ultimately you and your clients save time and money, plus your site is a little more inclusive…what’s not to love?
Additional Resources
6 Design Alternatives to Avoid Slideshows | Blog
Accessibility Laws and Guidelines | Webinar
5 Ways to Sneak Accessibility into your Next Design | Blog
Building Accessibility into a Website from the Start | Blog
I know quite a few developers who love Vim but think they have to use an IDE to be able to debug their applications. In this post I will show you how to set up Vim as an Xdebug client.
The Vdebug plugin for Vim provides a way to do debugging inside Vim using the tools that Vim provides to great advantage. As the project page says,
Vdebug is a new, fast, powerful debugger client for Vim. It's multi-language, and has been tested with PHP, Python, Ruby, Perl, Tcl and NodeJS. It interfaces with any debugger that faithfully uses the DBGP protocol, such as Xdebug for PHP.
In this post we will focus on PHP, and more specifically, Drupal.
The first step is to confirm you have Xdebug working in your application. If you don't have Xdebug installed, head over to their site and follow the installation guide. You don't have to do anything special to use Vdebug with your PHP application; if you are currently using another debugging client, like the one built into PhpStorm, you shouldn't have to change anything in your PHP configuration. But for the sake of completeness, here is an example Xdebug configuration we (Mediacurrent) use on all of our projects. We have standardized our local development work on the great Drupal VM project.
[XDebug]
zend_extension="/usr/lib/php5/modules/xdebug.so"
xdebug.coverage_enable=0
xdebug.default_enable=0
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.remote_host=10.0.2.2
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_log=/tmp/xdebug.log
xdebug.remote_autostart=false
xdebug.idekey="PHPSTORM"
xdebug.max_nesting_level=256
The next step is to set up your browser to allow you to easily enable and disable Xdebug. To do this, I use the Xdebug helper extension for Chrome. It's much easier than setting up POST variables and cookies for each browser session. You will want to make sure your `idekey` is set to match what you are using in Xdebug on the server; in my case, that is PHPSTORM.
Now that we have Xdebug all squared away, let's learn about Vdebug. Just what does Vdebug allow you to do?
If you don't already use a plugin manager for Vim, I recommend you check out Vundle, but the way you install Vdebug has no effect on how you use it. I recommend that you follow the installation instructions on the Vdebug project page to get it set up.
For your convenience, I have listed the default key bindings for Vdebug below (I have changed the defaults in my own vimrc since I find them to be somewhat difficult to remember).
<F5>: start/run (to next breakpoint/end of script)
<F2>: step over
<F3>: step into
<F4>: step out
<F6>: stop debugging (kills script)
<F7>: detach script from debugger
<F9>: run to cursor
<F10>: toggle line breakpoint
<F11>: show context variables (e.g. after "eval")
<F12>: evaluate variable under cursor
:Breakpoint <type> <args>: set a breakpoint of any type (see :help VdebugBreakpoints)
:VdebugEval <code>: evaluate some code and display the result
<Leader>e: evaluate the expression under visual highlight and display the result
Note: during this screencast I am using my own key bindings, which I outline later in this post.
If you only ever debug using your machine's native version of PHP, you can skip this step. But if you need to connect to a server running your code (Vagrant or remote), there is one more step to set up Vdebug: you need to tell Vdebug where your files live on your local file system and on the server's file system. This is necessary because the file paths on your local machine and the server are most likely different. Let's look at an example.
" Mapping '/remote/path' : '/local/path'
let g:vdebug_options['path_maps'] = {
\ '/var/www/drupalvm/web' : '/Users/kepford/Sites/someproject/web',
\ '/home/vagrant/docroot' : '/Users/kepford/Sites/anotherproject/docroot'
\}
The first portion of the mapping is the remote path /var/www/drupalvm/web; the second is the local path /Users/kepford/Sites/someproject/web. That's all you have to do for remote debugging.
Now that we have everything set up let's talk about custom configuration. The default keybindings that ship with Vdebug may not suit your preferences. Personally, I try to come up with mnemonics for each Vim command and struggle to remember the correct function keys for Vdebug. The good news is it's easy to override the defaults in your .vimrc file. Here's an example of what I have done with my defaults.
let g:vdebug_keymap = {
\ "run" : "<Leader>/",
\ "run_to_cursor" : "<Down>",
\ "step_over" : "<Up>",
\ "step_into" : "<Left>",
\ "step_out" : "<Right>",
\ "close" : "q",
\ "detach" : "<F7>",
\ "set_breakpoint" : "<Leader>s",
\ "eval_visual" : "<Leader>e"
\}
Note: I have my <Leader> key set to the space bar and find this very productive.
You may not like my key mappings but they work for me. The rest of my configuration is fairly straight forward.
" Allows Vdebug to bind to all interfaces.
let g:vdebug_options = {}
" Stops execution at the first line.
let g:vdebug_options['break_on_open'] = 1
let g:vdebug_options['max_children'] = 128
" Use the compact window layout.
let g:vdebug_options['watch_window_style'] = 'compact'
" Because it's the company default.
let g:vdebug_options['ide_key'] = 'PHPSTORM'
" Need to set as empty for this to work with Vagrant boxes.
let g:vdebug_options['server'] = ""
If you encounter a problem getting Vdebug to work, it's likely that you need to change a setting. But before you do, turn on debugging. Run the following two commands to dump the output of the Vdebug log to a file.
:VdebugOpt debug_file ~/vdebug.log
:VdebugOpt debug_file_level 2
Then tail the output of this file (tail -f ~/vdebug.log) as you start Vdebug and try to use it.
As with any good Vim plugin, you can learn a lot more about how to use it by reading the help documentation at :help Vdebug.
Additional Resources
Loading and Rendering Modal Forms in Drupal 8 | Blog
Flag Migrations in Drupal 8 | Blog
Migration with Custom Values in Drupal 8 | Blog
Recorded April 12th, 2017
Yes, once again, folks, the wonderful DrupalCon NA is happening in shiny Baltimore, MD, and special guest Heather Rodriguez (@hrodrig) from CivicActions joins us to talk about the events of DrupalCon and about her talk on Conquering Imposter's Syndrome. As always, we have some Drupal news, a couple of Pro Project Picks, and, with Ryan unable to join us, Bob takes over the Final Bell.
Episode 31 Audio Download Link
Show notes and past episodes available at http://www.mediacurrent.com/dropcast.
If you want to be a guest on this fine show, you can sign up at https://www.mediacurrent.com/dropcast/be-a-guest
Mediacurrent is hiring. Go to mediacurrent.com/dropcastjobs
Email your questions to: dropcast@mediacurrent.com
Friday 5: http://mediacurrent.com/friday5
Most, if not all, of our articles come from TheWeeklyDrop: the best Drupal news and links delivered to your inbox every week, which we turn around and steal.
Tell us about yourself.
How did you get started with Drupal?
Tell us about your upcoming session at Drupalcon.
What was the outcome of last year’s BADCamp FE Summit?
Are there plans to do it again this year?
Bob: Configuration Split
Heather: Config partial export
Mario: Twig Tweak
Jim Birch emailed us like he always does. MIDCamp was a success again.
DrupalCon - April 25 - 28th in Baltimore (Lots of peoples)
Drupal Camp Utah - May 12th in Sandy UT (Mark can’t go)
DinosaurJS in Denver on June 15th & 16th, 2017 at Space Gallery
Asheville Drupal Camp - July 14 - 15 in Asheville NC
GovCon - July 31 - Aug 2nd - Washington DC (ish)
BADCamp - October 18 - 21 - UC Berkeley
The first week of April I was among the attendees at the year’s largest US Angular conference, ngconf, in Salt Lake City, Utah. As one of the keynote speakers last year, I was excited at the opportunity to attend again, and curious about how different the atmosphere would be this time around. Last year the community was anxiously awaiting the release of Angular 2.0, and this year 4.0 was released just before the conference. Sidestepping the understandable confusion about the updated release cycles, it was the atmosphere in the community and the maturity of both the core framework and surrounding tooling that I was most interested in exploring.
This year’s day 1 keynote was delivered by Stephen Fluin and Igor Minar from the core team at Google, and much of it was focused on updates in the wider ecosystem: Angular Material, the CLI, IDE support, and Ionic were specifically highlighted. Each of these was in a fledgling or non-existent state during last year’s conference, but significant progress has been made on them since.
Especially exciting for the Drupal world, though, was the keynote case study presented by Brian Martin of NBA.com, which involved a progressively decoupled Drupal site built with Angular. Brian spoke of the shared programming principles between the two, the simplicity of integrating Angular with Drupal blocks, and more. NBA.com is a fascinating example of a large scale architecture built along lines with which we at Mediacurrent are very familiar, and it was great to see Drupal featured so prominently in the keynote for the second year running.
Further focus in the keynote was given to their new release cycle, where semantic versioning is being combined with scheduled releases that include potential breaking changes in a new major version release every six months. The team is working hard to make upgrading both immediately beneficial and painless, even across major versions, much like the direction one of Dries’ latest blog posts suggests for Drupal. However, they also recognize that not all sites can easily be kept up to date with each release, so the big announcement in the keynote was that Angular 4.x would be a LTS release, with critical bugs and security fixes continuing until 6.x is released.
Several other sessions I attended during the conference are worth expounding on, but for brevity, I will try to hit the highlights. Angular Universal, the framework’s isomorphic or server-side rendering story, is one I’ve spent a lot of time investigating. Since it moved from a community-run project into the core of the framework, its maturity has significantly improved. Jeff Cross’ session on it showed the SEO, social sharing, and performance benefits it can provide, but for a wider understanding of the vision behind Universal, I highly recommend this post by Wassim Chegham. Another highlight was Sean Larkin’s workshop on Webpack, which walked us through creating our first webpack plugin from scratch while helping the audience better understand the thinking behind its architecture.
More big news came on the conference’s final day, when Brad Green and Rob Wormald delivered the second keynote. First announced was TypeScript’s adoption as an officially sanctioned language at Google, a big deal for a company that had previously only condoned the use of C/C++, Java, Python, Rust, JavaScript and Go. TypeScript is being widely embraced across frameworks and companies now, and the strongly typed superset of JavaScript deserves all the attention it has been receiving for the numerous benefits to developer experience and application stability it provides.
But even bigger news, perhaps, was Google’s announced commitment to open sourcing and improving usability around the build tools that allow all of Google’s internal Angular products to always run on the latest master branch. The ABC initiative, for Angular, Bazel, and Closure, has the potential to vastly simplify the use of Angular by enterprise clients, while increasing the performance, velocity and stability of their products. While this commitment is still in the design phase at this point, it points to just how seriously Google is taking improving the community’s ability to scale their apps while keeping up with where the framework is headed.
Initially, as a Drupal architect, I was drawn to Angular primarily by its familiar use of services and Dependency Injection, as well as a curiosity about its use of TypeScript. But much as I have found in the Drupal world, it is the inviting and passionate energy of the community that makes being involved continue to be worthwhile.
The Saturday after the conference, I was invited to participate in a Contributor’s Day event that brought together the core team and some prominent members of the community to discuss ways that Google and the community can better support each other. This ended up being a fascinating and eventful enough discussion that I’ve decided to write a separate post about it, which will be published soon.
Additional Resources
Presentation Framework Generalization, Toward Javascript and Agnosticism | Blog Post
Decoupling in Drupal 8 Based on a Proven Model | Blog Post
Decoupled Blocks with Drupal 8 and Javascript | Blog Post
On Saturday, April 9th, as most of the attendees of ngconf were catching flights home, a group of the core Angular team from Google and prominent community members came together for a Contributor’s Day discussion organized by ThisDot. The stated goal was to discuss areas where the core team and community could better support each other and help advance adoption and ease of entry into using Angular. It was a privilege to join such an esteemed group of open source enthusiasts, and as it turned out, there was a lot of interest in the perspective I brought with me from my time organizing in the Drupal community.
The makeup of the room was ideal: several JavaScript & Angular trainers and consultants were in attendance, some folks representing large enterprise sites, as well as people with significant experience in other open source communities like Ember, Apollo / GraphQL, and of course Drupal. The Google team was represented by Igor Minar, Stephen Fluin, Rob Wormald and Alex Eagle.
The list of topics we covered was determined by a quick brainstorming session at the beginning of the day, and included numerous topics around cultural enhancements: increasing community support, improving documentation, reaching new people, attracting other communities, and supporting the core team’s efforts to drive the web forward. Other topics advanced specific technical discussions like improving the CLI, easing enterprise build pain points, standardizing tools and debugging, simplifying the upgrade path from AngularJS (1.x), and supporting the use case of Angular serving as a “meta-framework” for content and component management systems.
On the technical side, the “meta-framework” discussion was especially interesting from the CMS perspective. Justin Schwartzenberger of AngularAir spoke about the Angular Playground tool, which allows you to develop, test, and render your components in isolation, similar to React Storybook. Under the hood, this works by leveraging a dynamic component bootstrapping and rendering app that is quite similar in principle to the model showcased last year in the Decoupled Blocks module. Rob Wormald has also been spending some time experimenting with ways to build upon this model toward what he refers to as an Application Management System, or AMS.
Along similar lines, there was a lot of interest in looking at ways to incorporate using the Intermediate Representation metadata generated by Angular’s compiler to fuel dynamic rendering as well, similar to the insanely cool Minecraft Angular application visualization tool demoed earlier in the week by Minko Gechev. While the practical need to walk around a garden of your application’s component structures in VR might be questionable, it was an entertaining way to show the power of the abstractions provided by the compiler, perhaps more usefully demonstrated with reverse engineering tools like ngrev.
The most exciting news coming out of Contributor’s Day, however, was really back on the cultural side. Towards the beginning of the day one of the topic points raised was how the core team and community leadership could improve support for the wider community and help the community grow. I framed the question to the group of how, once this Contributor’s Day discussion was over, we could formally continue to push the topic items and rough vision we were establishing forward. It was mentioned that there was a Slack group organized by Maxim Salnikov, ngCommunity, where Angular community organizers were encouraged to collaborate.
While the ngCommunity Slack is a valuable tool, as the conversation moved onto other topics I had a lingering thought that kept nagging at me. An invite-only Slack was not sufficient for building a global open source community. Through my lens as a Drupal organizer and community member, I’d noticed an important difference with how the Angular world was organized that I’d never really voiced or put into words before. But suddenly it hit me: heretofore in Angular, there was the core Google team and there was broadly ‘the community’, but the community itself had no formal structure. Conferences and meetups are independently organized and run, and there is no formal network of support for local organizers.
We continued through our topics list with lots of fascinating discussions, but I knew there was more to say on the community building piece. Luckily, near the end of the day, Igor spoke up on the importance of coming back to the topic. I took the opportunity to explain how the Drupal Association was formed with the specific intention of being the central group tasked with nurturing and growing the global community, and how a similar organization might benefit the Angular world. Everyone was on board with the idea, and Uri Goldshtein volunteered to help coordinate its creation.
In the weeks since Contributor’s Day, I’m happy to report that Uri, Maxim and myself have continued to push this idea forward, working with the core team and other community leaders to build out a vision for greater community support and growth. Although very much still in the vision building and planning stages, we are excited at the potential we see for unifying the community in the months ahead. For me, the instant appeal such an idea had is also very much a testament to the uniquely awesome community we have built in the Drupal world.
There was another recurring topic during Contributor’s Day around some of the great ideas coming out of the Ember community. With EmberConf taking place just before ngconf, and representatives of Ember in the room, this cross pollination was fascinating enough that I’ve decided to write a separate post about it, which will be published soon.
Additional Resources
Build with us - ngconf 2017 recap | Blog
Building wunderground.com with Drupal and Angular 2 | Blog
Decoupling in Drupal 8 Based on a Proven Model | Blog
Guardr is a Drupal distribution with a combination of modules and settings to enhance a Drupal application's security and availability to meet enterprise security requirements. These security requirements have been added after a review and study of industry best practices from security standards, regulatory controls, and security certifications. These include but are not limited to:
Guardr's philosophy is based around the CIA Information Security Triad where confidentiality, integrity, and availability are held in high regard.
For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.
In addition, Guardr maintainers are always on the lookout for modules and settings that will harden the security of Drupal by protecting against risks detailed by OWASP in the "OWASP Top 10 Most Critical Web Application Security Risks." Some of these risks are ones that the Drupal community witnesses with the release of Drupal Security Advisories.
While the Drupal 7 version of Guardr has been available for 5 years, I am pleased to announce the first alpha release of Guardr for Drupal 8: Guardr 8.x-1.0-alpha1. Drupal 8 Core has a number of built-in security enhancements that help websites and applications maintain security and availability. Guardr builds on top of Drupal 8’s foundation by adding Core hardening configurations via Guardr Core. Included Drupal 8 contrib modules extend site security through improved login security, session management, system auditing and logging, and other features.
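If you want to kick the tires on the alpha, a minimal install sketch with Drush might look like the following. This is an assumption on my part, not official installation documentation: the profile name and database credentials below are placeholders, so confirm the details on the Guardr project page on drupal.org first.

```shell
# Hypothetical quick-start: after downloading and extracting the
# Guardr 8.x-1.0-alpha1 release from its drupal.org project page,
# install the site using the Guardr install profile.
# (db credentials and account name are placeholders)
drush site-install guardr \
  --db-url=mysql://dbuser:dbpass@localhost/guardr \
  --account-name=admin
```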
Below are the items the Guardr community sees as next steps to help drive Guardr for Drupal 8 to a stable release:
I had the pleasure of presenting Raising The Security Bar with Guardr at DrupalCon Baltimore. There are more details on the project and great Q&A at the end of the session.
We would love your help! If you are interested in contributing to Guardr, we have needs which include writing documentation, supporting Guardr users, testing patches and updates, and developing new features. Getting involved in the issue queue is a great place to start. If you want to chat about how to help, feel free to ask questions in IRC at “#drupal-guardr” or Tweet us at @guardrproject.
Additional Resources
10 Great Security Podcasts, Blogs, & Resources | Blog
Evaluating the Security of Drupal Contrib Modules | Blog
Best Practices for Drupal Site Security | Blog
This is Part 2 of 2 of my Drupal vs Adobe Experience Manager (AEM) blog post. In the first part, I compared the two from the perspectives of content authoring, marketing, and business. In this part, I look at the two from an IT and community perspective. I also need to repeat my disclaimer that I'm a long-time Drupalist, but in this blog post I endeavor to be even-minded and objective.
Information Technology (IT) covers a wide area of different disciplines. The disciplines discussed in this comparison are development, configuration, and nonfunctional considerations.
This comparison further breaks down development discipline into back end and front end development. Back end development refers to custom development done to change the functionality of an application, whereas front end development pertains to the look of the application -- colors, fonts, layout, etc.
Skillswise, developing on top of AEM requires proficiency in the following programming languages: HTML, CSS, HTL, Javascript, and Java. Developing in Drupal also requires knowledge of HTML, CSS, and Javascript, but uses object-oriented PHP instead of Java. For Drupal, it's helpful to know the basics of the Symfony framework and the dependency injection design pattern. Also for Drupal, knowledge of Twig is required for theming, and knowledge of YAML is required for configuration of themes, modules, and distributions.
The heart of any content management system is, of course, the content, and how it's stored, created, accessed, modified, and removed. AEM and Drupal store their content differently.
AEM stores its content in files. The files are wrapped in a repository called Adobe CRX, which in turn is accessed by a variation of the web application framework Apache Sling. Thus, a developer needing to manage AEM content can do so programmatically by interfacing with Sling.
"Because this content is more structured than in files, more complex retrieval and management operations can be performed on it."
Drupal, in contrast, stores its content in a database. Though Drupal can interact with one of a number of different databases, MySQL is the most commonly used. Drupal wraps the database with components of the Symfony framework. Because this content is more structured than in files, more complex retrieval and management operations can be performed on it.
At the more abstract level, the data in both CMSs can be accessed programmatically through their frameworks, but their frameworks work differently. Much of AEM's activity is through HTTP GET and POST requests. A lot of information is packed into the URL when accessing or managing a content resource, including the content resource type (e.g. wiki, image, etc.), the content resource itself, what script to run on it, and parameters the script needs. Sling parses this information and operates on it. Drupal processes its pages through HTTP GET and POST requests as well, using a Router-Controller-View process, where Router, Controller, and View are all executable components of the Symfony framework.
AEM can be extended with application-specific modules. Sling uses the OSGI (Open Services Gateway initiative) framework to allow for new packages of code, known as bundles, to extend the framework. The framework gives the developer easy access to perform such operations as install, uninstall, start, stop, update, and more. Adobe offers a Marketing Cloud Exchange where users can buy and download any from over 300 extensions.
Adobe Marketing Cloud Exchange
Developers can extend Drupal through modules as well, although the process is different. Modules can hook into Drupal's Router-Controller-View process. A developer can add their own routing file and controller to extend or even override Drupal's functionality or alter content. At this writing, there are over 3,200 open source, downloadable modules for the current version of Drupal alone, and over 20,000 modules available across all versions. Project teams with a need to extend Drupal in a unique way are likely to find a module that already performs the extension for them.
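To illustrate that Router-Controller pattern, a custom module declares its routes in a YAML file that maps a path to a controller method. The module name, path, and controller class below are hypothetical, for illustration only:

```yaml
# my_module.routing.yml (hypothetical custom module)
my_module.hello:
  path: '/hello/{name}'
  defaults:
    # The controller method's return value becomes the page content.
    _controller: '\Drupal\my_module\Controller\HelloController::greet'
    _title: 'Hello'
  requirements:
    _permission: 'access content'
```

Drupal's routing system, built on the Symfony Routing component, matches incoming GET and POST requests against these definitions and dispatches to the named controller.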
For custom development for the front end, AEM uses HTL (HTML Template Language). HTL is JSP-like in that it mixes HTML and special directives. It is secure and does not require knowledge of Java to code in it. HTL is more secure than JSP because HTL automatically applies context-aware escaping to all variables being output to the presentation layer. For example, expressions placed in href or src attributes are escaped differently from expressions placed in other attributes. The same must be done manually in JSP, leaving the opportunity for human error allowing a cross-site scripting (XSS) vulnerability. Further, evaluation of the expressions and data attributes is done server-side and is not visible client-side. Any JavaScript framework can thus be used without interfering.
This all makes for a nice separation of concerns, where developers with different skill sets can work more easily in parallel.
<div>
  <sly data-sly-test="${properties.jcr:title && properties.jcr:description}">
    ${properties.jcr:title}
    ${properties.jcr:description}
  </sly>
</div>
HTL Example, from Adobe’s online documentation. Standard HTML tags are in red, HTL expressions are in green, and the special sly directive is in blue, which executes the expressions but does not appear in the HTML output.
Drupal employs Twig for front end development, which is a template engine for PHP and part of the Symfony framework. Twig is also the name of the engine's templating language. When a web page renders, the Twig engine takes the template and converts it into a 'compiled' PHP template. The compilation is done once; template files are cached for reuse and are recompiled on clearing the Twig cache. With Twig the developer can define regions, include CSS and Javascript files, subtheme an already-existing theme, pass in various kinds of attributes (including content), and easily manipulate images. Just like with modules, each theme uses a YAML file to provide metadata about the theme to Drupal. Examples of such metadata include theme name, theme description, which group of themes the theme belongs in (also known as "package"), the Drupal core version that the theme is compatible with, and much more.
<h1>{{ pageTitle }}</h1>
<div class="row">
  {% for product in products %}
    <div class="span4">
      <h2>{{ product }}</h2>
    </div>
  {% endfor %}
</div>
Twig example from KNP University. Standard HTML tags are in red, Twig expressions are in green, and control statements are in purple
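The theme metadata described above lives in a `*.info.yml` file. As a minimal sketch, a hypothetical subtheme's file might look like this (the theme name and regions are placeholders, not from any real project):

```yaml
# mytheme.info.yml (hypothetical subtheme)
name: My Theme
type: theme
description: 'An example subtheme.'
package: Custom
# Drupal core version the theme is compatible with.
core: 8.x
# Inherit templates and assets from an existing base theme.
base theme: classy
regions:
  header: Header
  content: Content
  footer: Footer
```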
Configuration is the ability for developers and administrators to non-programmatically modify the behavior, not content, of the system. Examples of configuration include setting user permissions, defining content structures, and setting up taxonomies.
AEM allows the administrative user a number of places in which to customize the behavior of AEM without custom development. One place is the Adobe CQ Web Console, which is where OSGi bundles and services are configured. A subset of OSGi configurations are available in the repository and can be modified there. This ensures that copying repository contents will result in identical configurations. A few configuration files are in the file system itself and are editable. Finally, some configurations can be performed within AEM using its Tools console. Various functionality is configurable out of box, including classic vs. touch-optimized UI, version purging, logging, run mode (e.g. publish vs test vs development), defining redirects, replication between environments, LDAP, and much more.
Adobe CQ Web Console Configuration
Drupal's configuration resides in both its database and in dedicated configuration files. The database stores configuration for most of its functionality, including block management, content types, users, roles, permissions, menus, taxonomies, Views, module management, search, web services, workflow, and a great deal more. Further, database configuration can be managed from a few different endpoints, including the Drupal UI, Drush, and Drupal Console.
Dedicated YAML configuration files exist for modules, themes, and distributions. These configuration files are highly flexible. One can edit them based on one's needs, and the only restriction to editing is the YAML syntax and format itself.
A small part of Drupal's configuration UI
"Drupal stores site configuration data in a consistent manner: everything from the list of enabled modules, through to content types, taxonomy vocabularies, fields, and views."
Drupal stores site configuration data in a consistent manner: everything from the list of enabled modules, through to content types, taxonomy vocabularies, fields, and views. Configuration is stored in Drupal's database and is exportable to YAML files, making it easy to move configuration between environments, for example, development, test, and production environments. The recipient environment can easily import the exported-to-file configuration from another environment. Also, because site configuration is exportable to files, it can be stored as part of a project's codebase and is thus version-controllable.
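The export/import workflow described above is typically driven from the command line with Drush. A sketch of the round trip between environments (the sync directory location is whatever the site's settings define):

```shell
# On the source (e.g. development) environment: export active
# configuration from the database to YAML files in the config
# sync directory.
drush config-export -y

# Commit the exported YAML to version control and deploy the
# codebase; then, on the target (e.g. production) environment:
drush config-import -y
```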
Configuration management in AEM is simpler, in that all configuration is stored in files by default. Thus, no export or import tasks need to be performed.
The term “nonfunctional” pertains to any non-user-facing quality of an application that warrants the attention and planning of an IT staff, including security, performance, and scalability.
Regarding security, like most CMS's, AEM manages user access through users, groups, access control lists, and control over what content can be created, modified, updated, and deleted, and by whom. AEM provides a Security Console to manage all user access. AEM documentation provides a checklist for project teams to follow to ensure site security. Also, as mentioned earlier, AEM's theme language HTL prevents XSS vulnerability by design. One area of vulnerability is in the Sling component, where unauthorized access to AEM’s content repository is possible if developers don’t follow best practices.
Drupal manages user access through users, roles, and permissions. Drupal provides an administrator fine-grained control over what content can be created, modified, updated, and deleted, and by whom. More than this, Drupal is designed with security in mind, and the worldwide Drupal community supports a dedicated security team to ensure rapid response to issues. Adherence to Drupal’s coding standards also safeguards against security risks.
Regarding performance and scalability, AEM works primarily with files, be they content, javascript, CSS, etc. AEM can thus take well-established performance and scalability measures such as caching, load balancing, and use of content delivery networks (CDNs). Adobe has written guidelines for optimizing performance for AEM, but they certainly can be applied to many other web solutions.
"Drupal has proven itself as highly scalable with such sites as The Weather Channel, GE, and the Grammy Awards."
Drupal can make use of these measures as well, but having the capacity to manage more highly-structured data introduces scenarios that require frequent database interactions, like user-generated content on a high traffic site. Without proper planning, excessive database activity introduces performance bottlenecks and can limit scalability. The Drupal community has written guidelines for managing this, and indeed, Drupal has proven itself as highly scalable with such sites as The Weather Channel, GE, and the Grammy Awards.
Though some of AEM's components are open source, in the end it is a proprietary product. Though there are user forums for AEM developers to support one another, support is best delivered by Adobe itself. Some clients like to know that there's someone "at the other end of the phone" should something go wrong. However, at Mediacurrent, we're seeing less of this because of the now widespread acceptance and success of Drupal in the enterprise space.
An open source project with a community as large as Drupal's ensures the platform’s long-term viability. Drupal’s community is enormous, with over one million participants worldwide from all skills and backgrounds who have contributed thousands of modules implementing almost any functionality one might want. With its depth and breadth of global engagement, Drupal is not going away anytime soon.
Drupal’s enormous, thriving community ensures a top tier platform for years to come. (From the Drupal Association)
Another benefit of having a large community is that the project is constantly undergoing a vast peer review, with countless independent developers working on the project worldwide. The community also supports a dedicated Drupal security team to immediately respond to new threats to keep the project safe. Also importantly, the community's ethos of teamwork and volunteerism drives constant innovation, keeping Drupal on the vanguard of Web CMSs and keeping it likely to integrate with whatever new trending technologies emerge.
Finally, a word about the people themselves. Those who would volunteer their time to contribute to the project tend to give of themselves in other areas of society as well, be it participating in their community or giving time and energy to nonprofits. And that's a special group of people to be around.
I've compared Drupal and AEM from several perspectives, including content authoring, marketing, business, IT, and community. There are some similarities between the two, but each has its differentiators. AEM provides an excellent user experience for content authors and marketers, whereas Drupal is more powerful and flexible in its ability to manage complex, structured content. Drupal and its contributed modules are free, whereas AEM has a substantial licensing cost and forces the buyer into vendor lock-in. AEM’s configuration management and scalability approaches are simpler, though Drupal has more than proven itself at scale. A tremendous advantage for Drupal is its community, which provides almost any extension imaginable, provides instant mutual support, and immediately responds to any new security threat. Despite my pro-Drupal bias, I hope this two-part blog post has compared these technologies in an even-minded way. I encourage the reader to explore and evaluate both of these further and to reach out to Mediacurrent when exploring Drupal particularly.
Additional Resources
Comparing Drupal and Adobe Experience Manager, Part 1 of 2 | Blog
Today's consumer expects a tailored web experience every time they engage with your brand. In fact, 86% of consumers will pay 25% more for better personalization.
At a time when 'one-size-fits-all' strategy falls short of demand, business leaders must ask the question, does my website keep it relevant?
In our new ebook, we will explore the options for creating personalized web experiences with Drupal as the backbone for your digital strategy.
Topics include:
Jason Want is a Lead Drupal Architect at Mediacurrent. An Acquia Certified Developer with over six years of Drupal experience, he is a co-organizer of Drupalcamp New Orleans and regularly presents at monthly Louisiana Drupal user meetups. This ebook was inspired by his recent presentation, Marketing Automation and Web Personalization with Drupal from Drupalcamp Atlanta.
In her role as Marketing Content Strategist, Tara Arnold champions Mediacurrent’s “culture of content,” supporting the multi-channel strategy, development, and promotion of Mediacurrent’s thought leadership resources.
Additional Resources
Personalization: The 'Hey, Joe!' Experience | Mediacurrent Blog
Using Marketing Automation for Personalization: Benefits and Challenges | Mediacurrent Blog
5 Smart Things You Can Do With Marketing Automation | Friday 5 Video
DrupalCon Baltimore 2017 was, without a doubt, a great event. It was the first DrupalCon to hit the US East Coast since DrupalCon DC in 2009, so this (ex-) New Englander was happy to have just a day's drive to get there.
As I had missed DrupalCon New Orleans in 2016, I was eager to make the most of this year's event. Given I've been the lead maintainer of the Metatag module for a while it should be no surprise that I focused on SEO-related sessions and BOFs this time around. I also have an interest in internationalization and multilingual functionality, so I made time to attend some related sessions and BOFs.
The person who literally wrote the book(s) on Drupal SEO, Ben Finklea, held a multi-hour mini seminar on improving a Drupal site's SEO rating. Ben stepped through a number of techniques and tools to help, appearing to follow the latest edition of his Drupal book. I stopped by for a few moments and was pleased to see a good crowd in attendance.
As is my tradition, I held a Birds-of-a-Feather, aka "BOF", focused around search engine optimization. The BOF was very well attended, with a few people having to stand as all the seats had been filled already.
We started the discussion on Metatag. I explained how I'd always considered the release of an "easy" or "simple" edition of a module to be a sign that the maintainers had failed in their goal of porting a module to a new release of Drupal. So, upon seeing someone release a module called "Easy Meta", I released Metatag 8.x-1.0 within three days at the end of January, just to get it out the door. I mentioned that Views integration was a high priority for the next release, which did in fact go into Metatag 8.x-1.1 at the end of May.
One point we discussed was that it could take some effort to properly build out the necessary configuration using Metatag for a site. Ben Finklea joined us and explained that, even as a seasoned user of Drupal 8 and Metatag, it could still take him upwards of eight hours to fully tune each content type, vocabulary, etc. on an average site, because of the need to go back and forth between different settings forms to configure everything. I mentioned how I had the same experience, and how I had built a spreadsheet listing all meta tags to help plan out how everything should be configured across an entire site, but that after two or three sites it became almost more effort to go through and copy/paste everything into Metatag. I then shared how I had been working on a huge spreadsheet-like grid UI for managing the default configurations, but that it led to other problems, most notably that it could cause PHP to fail due to the number of form fields being submitted. Clearly, further work is necessary to improve upon this, so advice and help would be greatly appreciated.
A few other items came up with regards to Metatag. One item that was discussed was the possibility of being able to have a live-ish preview of what the meta tags would look like before saving changes. Some work was done for that but ultimately I felt it would need to be AJAX-driven and could get complicated to handle, especially with images and other files that could be on the form, but it definitely would be nice to have.
A pain point that some mentioned was that it can be hard to tell where individual meta tag values come from. For example, if a node has an output that contains incorrect information, it can be difficult to tell whether the output is overridden on that individual node, whether it comes from the content type's configuration, the content configuration, or the global configuration. There aren't any current plans to tackle this problem, but I'm definitely open to ideas.
Another idea that was discussed was having a way of uploading files for the various site validation features that some services use, e.g. Google. While many of these prefer something to be added at the DNS level, they still suggest uploading a text file of some sort to help prove ownership over a domain. Right now Metatag has a site validation submodule which supports several services, so it might be useful to have a separate way of uploading the text files into this system, rather than having to go through the global default configuration. There is a separate Site Verify module, and there were plans to deprecate it in favor of Metatag, but the UI for Site Verify is easier for people who just want to do this limited task.
A final discussion point related to Metatag was in regards to JSON-LD support. Right now there are two sandbox modules available which can add some rudimentary support, though neither one is overly functional, and both expect the site builder/administrator to have a good deal of knowledge and skill to customize the JSON data correctly. One person I talked to prior to the BOF had suggested leveraging the data from the RDF UI module, which already provides much of this data in an appropriate structure, so it would just take extending the output to match the JSON-LD format. This is not something I've personally gotten into yet, but I feel it is the next step in SEO functionality for websites, so it is on my list to look at.
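For context, JSON-LD structured data is usually embedded in the page as a script tag using schema.org vocabulary. A minimal, hand-written sketch (all values below are placeholders, not output from any of the modules mentioned):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "Example article title",
  "author": {
    "@type": "Person",
    "name": "Jane Author"
  },
  "datePublished": "2017-05-01"
}
</script>
```

Search engines parse this block independently of the visible markup, which is why generating it correctly from structured entity data is appealing.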
Moving past Metatag, we talked about schema.org. A pain point for people who implemented this functionality on their site was that the specifications appear to change with little notice - Google's validation tool would change its formula fairly frequently without any updates being provided on their official blogs. One site handled the schema.org output at the theme level, which worked but is a little clunky, given it is putting business logic in the theme.
One interesting tidbit that came out of the BOF was related to schema.org's data structures. Apparently, they're "very good", to quote one attendee, at expanding the specifications to cover new data models. So, should that need arise, it's apparently worth taking the time to reach out to the schema.org people. Good to know!
In the sitemap world, most of the attendees were using Simple XML Sitemap as the original XML Sitemap module's development had pretty much stopped. The module doesn't (yet) cover every use case, but what is available has been very reliable.
On a related note, the GoogleNews module could use some TLC; it's a news standard that isn't used too much, but some sites have had some good success with it.
One of the recent changes in web performance is the AMP system from Google. Lullabot has put together a step-through guide (https://www.lullabot.com/articles/how-to-try-out-amp-with-drupal) on using a module and theme combination for making a site AMP-capable, and several people mentioned they'd used it with good success. Discussing AMP in general, people lauded its speed for visitors but mentioned content had a really high bounce rate, generally a low engagement, and the few who tried had a difficult time trying to break the visitor out of the AMP page.
During the course of the BOF, an idea sprang up about extending the SEO checklist module to have it work on a per-entity basis. Ben Finklea had been considering this functionality but had not built it out yet. The idea would be to help improve the SEO value of individual entities, e.g. nodes, e.g. to make sure that it has a good URL, that it has image and description meta tags, etc. This sounded like a great idea and hopefully something will come of it.
For a community and software project that is worked on by thousands of people around the world, the ability to localize content and interface is an important requirement. Drupal 8 really shines in this regard and over the week there were many sessions and BOFs dedicated to helping people make the most of it.
On Monday, while all of the training was being held, about twenty of us gathered for the third Community Summit. First held in New Orleans, my first summit was at DrupalCon Dublin where I enjoyed hearing great insights from the wonderful Jenny Wong about how the WordPress community is organized and operates.
Run in a somewhat unconference style, this time around the summit fairly quickly split into two main subgroups - one focused on event organizing and all that it entails, and another on aspects of promoting and supporting Drupal with an international audience.
It may not come as a surprise to many, but while Drupal the software has amazing flexibility for supporting every single language out there, the drupal.org community site itself and related infrastructure pieces are almost exclusively English. Some regional communities have sprung up around the world to help fill in the gaps locally with their own support infrastructure and events, prime examples being Drupal France and Drupal Brasil, both of which have very active communities.
One attendee, Seferiba Salif Soulama, who was able to join us thanks to a grant from the Drupal Association, shared his experiences trying to promote Drupal in his French-dominant home country of Burkina Faso, in West Africa. He talked about the difficulties he had finding training materials in French and mentors from other countries who would be able to help. Seferiba also discussed how he had set up some training programs at local high schools to get students interested in Drupal, and shared how he was making efforts towards getting Drupal considered as a standard platform within the government; he explained that with people from other OSS communities also attempting to get their platform adopted, there is much work to be done.
The last item I'll mention was a BOF focused around internationalization, led by two people from Lingotek. The small group of about a dozen people shared stories of different projects they'd worked on and some challenges they'd encountered. There definitely was a strong sense amongst the group that there was a lot of difficulty in getting Drupal 7 working right for different scenarios, given how many different approaches there were, and how many modules could be involved. At several times when a question was raised about a specific use case, it was noted that it would be much easier to achieve the end results with Drupal 8 than with the Drupal 7 instance the person had been using.
On Wednesday Mediacurrent's own Jen Slemp and Lingotek's Calvin Scharffs tag-teamed on a short presentation that brought the worlds of internationalization and search engine optimization together for the first time at a DrupalCon conference. Their presentation was packed with good advice, and the recording is well worth taking the time to watch.
As has become the DrupalCon tradition, the finale was the Trivia Night, with questions once again provided by the tricksters at Drupal Ireland. With most of the Mediacurrent gang already headed home, I joined a group of Acquians which included Wim Leers and Ted Bowman. While the group had already chosen their table name by the time I arrived, Ted suggested that with me on their team they should have named the group "Wim Ears". Because of my infamous rabbit ears... ==:-)
Anyhoo... both of my topics of interest came up as questions over the night. The first was around round three, when the night's emcee, the ever-eloquent Jeff Eaton, challenged the audience to name one of the two maintainers of the Metatag module, either by their real names or their drupal.org names. Given that Metatag is the fifth most popular module for Drupal 8, I strongly suspect that a large majority of teams got that question correct, and it was an honor to be mentioned in a question! Incidentally, the correct answers were either "Damien McKenna" or "Dave Reid"; alternatively, our drupal.org usernames would have been "DamienMcKenna" or "Dave Reid" - as you can tell, we're an original bunch.
The second question of interest had to do with the number of people who were credited as being part of the internationalization initiative for Drupal 8. Having completed the final piece of the initiative - supporting translation of all configuration entities - the initiative's leader Gábor Hojtsy opened an issue to credit every single person who had been involved at one point or another. Which Alex Pott then scripted. Which subsequently crashed drupal.org for a few minutes. Well, this is what happens when you try to credit 1,661 people. Which, incidentally, was the correct answer. However, what most people remembered was the mentions of "more than 1,600 contributors" in issues, blog posts and tweets that bounced around at the time. Though Mr. Eaton informed the eager attendees that the judges would accept answers to the nearest ten, our motley crew could only count to 1,600. Ah well, next time.
As always DrupalCon Baltimore was an excellent conference and I'm glad I got to go.
I'd like to thank the Drupal Association and the many, many volunteer organizers for running DrupalCon, the presenters and BOF leaders, Mediacurrent for providing the funding to let me attend, and my wonderful family who made the trip with me and did some sightseeing in Baltimore while I was geeking-it-up.
Happy Friday everyone, and a happy 4th of July weekend. This week we have April Sides on the program to talk about how much fun Drupal Camps are and what you will take away from them.
Annually, the information technology research firm Gartner publishes its magic quadrant report comparing web content management systems (CMS) at the enterprise level. At this writing, the most recent report places Acquia/Drupal, Adobe Experience Manager (AEM), and Sitecore as the three leaders in the field, based on both their completeness of vision and their ability to execute on organizational requirements.
In response to this report, I’ve recently written a two-part blog post comparing Drupal to Adobe Experience Manager (AEM). Now I turn my attention to Drupal vs. Sitecore.
How two CMSs compare depends largely on the stakeholder’s perspective. Stakeholders can include content authors, marketers, developers, decision-makers, and more. Part 1 of this blog series focuses on comparing Drupal and Sitecore from three perspectives: the Content Author’s perspective, the Marketer’s perspective, and the Business perspective. Part 2 will focus on the IT and Community perspectives.
An obvious caveat: as a long-time Drupalist and Mediacurrent employee, I’m a biased observer. However, I endeavor to be objective in this blog post. Indeed, Sitecore has plenty of strengths, as will be explored in the upcoming sections.
In Sitecore, content authoring is handled by two components: Sitecore Experience Platform and Sitecore Experience Accelerator. Sitecore Experience Platform offers two editing tools, the Content Editor and the Experience Editor.
The Content Editor is a fully functional, behind-the-scenes editor that provides fine-grained control over content elements, organized as objects in a content tree. The author can select an item to edit its fields. Note in the image below that the UI is reminiscent of that of Microsoft Windows. This is by design; Sitecore offers an easy usability transition for Windows shops.
Sitecore’s Windows-like Content Editor
Sitecore’s Experience Editor, in contrast, allows for in-place editing. The content author need not leave the page to edit an item.
In-place editing with Sitecore’s Experience Editor. Any item on the page can be made editable.
Drupal offers similar editing options. One option is to go into a full edit mode, as depicted here:
Drupal’s edit mode for a node of content
Any content item can be made WYSIWYG-editable, as shown here:
WYSIWYG editing in Drupal
Another option in Drupal is in-place editing, with its Quick Edit module. Specific items are selectable to be edited, as in these two images:
Drupal’s Quick Edit module allows editing in place.
Sitecore’s other content authoring component, Sitecore Experience Accelerator, provides the user an interface to drag and drop various reusable elements onto a page, including text, images, video, JavaScript widgets, and more. For those who have read my previous Drupal vs. AEM blog post, a pattern is apparent. All three of the major CMSs I’ve compared are adopting an intuitive, drag and drop interface similar to those of personal-level site builders like Squarespace or Wix.
Sitecore’s drag and drop authoring interface
On the Drupal side, a combination of Drupal modules can match this functionality. Drag and drop functionality can be achieved either with the Panels module, or by using Acquia’s Lift service and accompanying Lift Connector module.
Acquia Lift’s drag and drop authoring interface
With Sitecore, content is broken up into small pieces, rather than one monolithic body. This allows for more granular control and reuse of content. In Drupal, monolithic content can be broken up with the Paragraphs module, a favorite of Mediacurrent’s. This module enables end users to choose on-the-fly between predefined Paragraph Types independent from one another, where a Paragraph Type is any unit of content (e.g. a text block, image, slideshow, etc.).
A Drupal Paragraphs example, demonstrating paragraph-level editing and control.
Drupal’s Entity Construction Kit module provides alternative means of editing with more granularity and reuse.
At a higher level, a key capability of any CMS is to flexibly display groupings of content items. Sitecore does this via its Search utility. Search terms can be ANDed and ORed together, and filtered down via facets. Search results can be displayed in a number of view types, including list view, image view, and grid view.
Sitecore’s search interface
Search results can be filtered further by a number of elements, including author, field, tag, and more.
Sitecore search filtering
Drupal provides more power and flexibility in displaying groupings of its content, thanks to its combination of highly structured content and the Views module in core, but at the cost of a steeper learning curve. Beyond basic filtering, Views can also filter on context, for example, who the logged-in user is, what parameters are passed in the URL, and more. Views can further display a grouping that joins two or more disparate data sources on a common element, for example, a grouping of articles written by authors from a particular set of newspapers.
Drupal’s Views configuration UI
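To make the idea of contextual filtering concrete, here is a minimal conceptual sketch in Python. This is not Drupal’s actual API; the function, field names, and sample data are purely illustrative of how a Views-style listing narrows results using context (the logged-in user, URL parameters) rather than fixed criteria.

```python
# Illustrative sketch of contextual filtering, in the spirit of Drupal's
# Views module. Names and data are hypothetical, not Drupal APIs.

articles = [
    {"title": "Drupal 8 tips", "author_id": 1, "tag": "drupal"},
    {"title": "Sitecore intro", "author_id": 2, "tag": "sitecore"},
    {"title": "Views deep dive", "author_id": 1, "tag": "drupal"},
]

def contextual_view(items, logged_in_user_id=None, url_params=None):
    """Filter a content listing using context rather than fixed criteria."""
    results = items
    if logged_in_user_id is not None:
        # Context: restrict the listing to content by the logged-in user.
        results = [i for i in results if i["author_id"] == logged_in_user_id]
    if url_params and "tag" in url_params:
        # Context: a filter value passed in from the URL.
        results = [i for i in results if i["tag"] == url_params["tag"]]
    return results

# A "view" of articles by user 1, narrowed by a ?tag=drupal URL parameter.
print([a["title"] for a in contextual_view(articles, 1, {"tag": "drupal"})])
```

In Drupal itself, the equivalent is configured through the Views UI rather than written by hand, which is part of what makes the learning curve worthwhile.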
Layout configuration in Sitecore is straightforward. Though it doesn’t allow for in-page layout editing, it does offer a wide variety of multi-column grid layouts to choose from, with each column being customizable. Sitecore offers a tool called a Splitter to create row regions and customize columns. Page elements can be defined to span any number of rows and columns, and any layout is further customizable with CSS.
Sitecore allows the setting of different columns for different devices
Sitecore’s Splitter tool allows for further column and row customizations
Layouts in Drupal can be built by developers for situations that require strict adherence to design mockups. However, Drupal offers a couple of author-controlled layout management options as well. The Layout module allows author-controlled layouts across multiple devices by decoupling layouts from developer-created themes.
Drupal’s Layout module gives authors layout control across multiple devices
Drupal’s Display Suite module lets content authors place fields (e.g. paragraphs, images, etc.) in any desired region of the page, and further select from a number of predefined layouts for any content type.
Drupal’s Display Suite module in action
Content authors have another option with Drupal’s popular Panels module, which allows the user to pick and choose layouts without leaving the page.
Modifying a layout with Drupal’s Panels module
Sitecore’s signature capability is its marketing functionality. Sitecore allows marketing campaigns to be set up by tagging content and assigning a campaign tracking code, which captures external interactions like email campaigns or traffic from an external web site.
Sitecore campaign setup
Once a campaign is set up, campaign analytics are built in to report on the campaign’s effectiveness, for example, tracking the number of contacts visiting the site, and the level of engagement of the traffic.
Campaign analytics in Sitecore
Another facet of Sitecore’s marketing capabilities is targeted, personalized content. Sitecore uses a rules-based interface to allow marketers to target content to certain users under certain conditions. Personalization is taken one step further with a feature called Engagement Plans, which goes beyond targeting content to employ actions and triggers upon certain conditions being met. Sitecore further maintains Experience Profiles for users, containing information on each user’s devices, online interactions, and even offline activities like stores visited and purchases made.
Sitecore’s personalization interface
To identify overall user behavior and usage trends, Sitecore provides dashboards and reports collected from internal and external data sources. Marketers can use these tools to monitor metrics such as page views, conversion rates, engagement value, and more. Sitecore can further use multivariate testing to allow marketers to compare usage analytics.
Sitecore’s Experience Analytics dashboard
As a content management framework, Drupal core has little of the marketing and analytics functionality that is built into Sitecore. However, as an open source platform, Drupal integrates well with every major marketing automation platform, for example, Marketo and Pardot, empowering marketing teams to use the tools that work best for their business. Further, the Rules module can create deep, personalized user experiences without developer involvement, and the Google Analytics module brings highly functional analytics and reporting capabilities into Drupal.
For sites that require commerce capabilities, Sitecore integrates with a suite of applications called Sitecore Commerce, for an additional licensing fee.
With Drupal, one can use Drupal’s integration with Magento, a popular, best-of-breed open source solution that has backing and partnership from Acquia. Another option is to use the Drupal Commerce suite of modules (again, open source), which offers comparable e-commerce functionality of its own. Both the Drupal solutions and Sitecore Commerce can provide rich online commerce functionality, including inventory management, shopping cart, wish lists, tax and shipping management, and much more.
It will be covered more fully in Part 2 of this blog post, but it’s worth briefly mentioning here that customization matters to many marketers. As an open platform, Drupal is built to be customized, as evidenced by its thousands of contributed modules, nearly 3,000 of them for Drupal 8 alone. With a proprietary system, the source code is locked, inhibiting the ability to extensively customize.
In addition to evaluating how well a CMS’s features meet functional and nonfunctional requirements, decision makers need to evaluate the return on investment when making a CMS decision. Drupal and its contributed modules are free, whereas Sitecore has a licensing cost starting at $40,000 for the first year of use, plus $8,000 for each additional year. The implementation cost starts at $65,000, and support and other licensing fees cost around $10,000 ongoing each year. Organizations considering Sitecore will need to calculate when they can expect a return on their licensing investment. Sitecore’s website cites some large organizations that have decided to make that investment, including Danone Nutricia, Dow Chemical, Uponor, and P&G.
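Using the figures above, a rough multi-year Sitecore cost estimate can be sketched. This is an illustration only, built from the starting prices cited in this post; real quotes vary by edition and agreement, and the sketch excludes hosting and implementation-partner costs beyond the starting figure.

```python
# Rough Sitecore cost estimate over N years, using the starting figures
# cited above. Illustrative only; actual pricing varies by quote.

FIRST_YEAR_LICENSE = 40_000   # initial license, year 1
RENEWAL_PER_YEAR = 8_000      # license renewal, each additional year
IMPLEMENTATION = 65_000       # starting implementation cost
SUPPORT_PER_YEAR = 10_000     # ongoing support and other fees, per year

def sitecore_cost(years):
    """Total estimated cost of ownership over the given number of years."""
    license_cost = FIRST_YEAR_LICENSE + RENEWAL_PER_YEAR * (years - 1)
    return license_cost + IMPLEMENTATION + SUPPORT_PER_YEAR * years

# Three-year estimate: 40k + 2 * 8k + 65k + 3 * 10k = 151,000
print(sitecore_cost(3))  # 151000
```

At roughly $151,000 over three years before any hosting or additional implementation spend, the licensing line item alone is a meaningful input to the ROI calculation.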
Drupal is not without its costs either. Like Sitecore, it carries costs for implementation and hosting. A key difference, however, is that Drupal has a vast array of hosting options at virtually every price point. If a client chooses to use Acquia Lift, for example, there is a subscription fee (contact Acquia for pricing). Lift provides a content authoring experience comparable to Sitecore’s, such as the drag-and-drop interface cited above, and its features are well worth the licensing fee for many organizations.
Another long-held concern among decision-makers is the support and responsiveness of the people behind the software. In the early days of open source CMSs, decision-makers were more willing to invest in proprietary solutions because, should something go wrong, they had access to the software development team on the other end of the phone. Over the years at Mediacurrent, we have seen that this concern doesn’t hold for Drupal. Part of the reason is that, at the lower levels of the technology stack, a number of excellent Drupal hosting services have a proven track record of hosting and servicing highly scalable, highly available Drupal solutions. At the higher levels of the stack, Drupal core and its commonly used contributed modules are fully unit-tested, and any new security vulnerabilities are rapidly addressed by a dedicated security team. This combination of strengths has led to many Drupal success stories for a diverse array of enterprise clients such as The Weather Company, Travelport, MagMutual, and many more.
Stay tuned for Part 2, which will cover the IT perspective and the community perspective on the two CMSs. Part 2 will also provide concluding thoughts on which situations fit these technologies the best.
Additional Resources
10 Reasons Why Marketers are Moving to Drupal | Blog
Using Marketing Automation for Personalization: Benefits and Challenges | Blog
20 Things You Must Know Before Approaching a Web Agency | Blog