Drupal Planet

Drupal.org - aggregated feeds in category Planet Drupal

Flocon de toile | Freelance Drupal: Using the Drupal 8 Cron API to generate image styles

January 5, 2017 - 6:08am

We saw in a previous post how we could automatically generate the image styles defined on a site for each uploaded source image. This time we will carry out the same operation using the Drupal 8 Cron API, which lets us decouple these mass operations from the actions carried out by users, which these operations could otherwise slow down.
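
As a rough, hypothetical sketch of the idea (not the article's own code; the module name, state key and query conditions below are invented for illustration), a hook_cron implementation could pre-build every configured image style for image files uploaded since the previous cron run:

/**
 * Implements hook_cron().
 *
 * Hypothetical sketch: pre-generate image style derivatives during cron so
 * they are not built while a user waits for a page.
 */
function mymodule_cron() {
  $last_run = \Drupal::state()->get('mymodule.image_styles_last_run', 0);
  // Look only at image files uploaded since the previous cron run.
  $fids = \Drupal::entityQuery('file')
    ->condition('filemime', 'image/%', 'LIKE')
    ->condition('created', $last_run, '>=')
    ->execute();
  $styles = \Drupal\image\Entity\ImageStyle::loadMultiple();
  foreach (\Drupal\file\Entity\File::loadMultiple($fids) as $file) {
    $uri = $file->getFileUri();
    foreach ($styles as $style) {
      $derivative_uri = $style->buildUri($uri);
      if (!file_exists($derivative_uri)) {
        $style->createDerivative($uri, $derivative_uri);
      }
    }
  }
  \Drupal::state()->set('mymodule.image_styles_last_run', REQUEST_TIME);
}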

Categories: Blogs

Deeson: New version of Warden, open source site tracker and manager for Drupal

January 5, 2017 - 6:00am

We’re very pleased to announce a new beta release of our popular open source Warden software, developed in-house at Deeson, which reports on and keeps track of multiple Drupal websites on different platforms. This version updates the MongoDB PHP driver version and also fixes a number of other issues.

We’re also planning our next release, in which we’re looking at including JavaScript library versions (jQuery, Backbone, React.js etc.) and server package versions (Apache, Nginx, Varnish, MySQL etc.). This will provide further information about the package versions being used by the sites and servers you’re running, helping you understand where vulnerabilities could be and highlighting libraries that need updating.

The Warden server software itself is written in Symfony and can be downloaded from GitHub. It works by providing a central dashboard which lists all the Drupal sites a developer might be working on, highlighting any that have issues, for example those that need updates. It’s composed of two parts - a module which needs to be installed on each of your websites and a central dashboard hosted on a web server.

Hosting companies like Acquia and Pantheon have their own reporting tools, but they only work if you host all of your websites on their platforms. If you have a number of websites running on multiple platforms, you need Warden to report on them all. Here is a guide to how Warden works.

Categories: Blogs

Agiledrop.com Blog: AGILEDROP: Drupal Blogs from December

January 5, 2017 - 4:08am
Last month we began with an overview of the blogs we wrote in November. We promised that from now on, at the beginning of every month, you will be able to see which Drupal blogs we have written for you over the past month, so you will be better informed. So, here's our December work. Besides an overview of November's blogs from us and from other authors, we began our December work with Drupal Camps in Middle America. There have been some complaints about the choice of the term Middle America, but we stand by our decision, which we also explained in the blog post.…
Categories: Blogs

Jeff Geerling's Blog: Thoughts on the Acquia Certified Drupal 8 Site Builder Exam

January 4, 2017 - 6:01pm

Another year, another Acquia Certification exam... (wait—I think I've said that before).

The latest of the updated Acquia Certification Exams is the Acquia Certified Drupal 8 Site Builder. It's meant for the average Drupal site builder to test and evaluate familiarity with building websites using Drupal 8, and it's the same as all the previous exams in style: a series of 40 questions posed in a conversational manner, with the answers you would provide if you were telling a project manager or site owner how you would implement a feature.

Categories: Blogs

John Svensson: Cron and Queues in Drupal 8

January 4, 2017 - 5:27pm

Cron is used to perform periodic actions. For example you would like to:

  • Send a weekly newsletter every Monday at 12:00 a.m.
  • Create a database backup once per day.
  • Publish or unpublish a scheduled node.
  • Send reminder emails to users to activate their accounts.

... or some other tasks that have to be automated and run at specific intervals.

Cron in Drupal

Cron configuration can be found at Administration > Configuration > System > Cron

What tasks does Drupal perform when cron is run?

This depends entirely on what modules you have enabled and use, of course, but here are some typical examples of tasks that run during cron:

  • Updating search indexes for your search engine when using the core Search module.
  • Publishing or unpublishing nodes when using the Scheduler module.
  • If you have the Update Manager module enabled, a task is run to look for updates. It also sends an email if you configured it to do so.
  • If you have dblog (Database Logging) enabled, this task deletes messages beyond a set limit.
  • Temporary uploaded files are deleted by the File module.
  • Fetching aggregated content when using the Aggregator module.

Running cron

First we have the Automated Cron core module (sometimes referred to as "poor man's cron"), which during a page request checks when cron was last run and, if it has been too long, processes the cron tasks as part of that request.

By default, cron is set to run every three hours.

There are two things to consider when using this approach. First, if no one visits your website, cron doesn't run. Secondly, if the website is complex or the cron tasks are heavy, they can exhaust memory and slow down the page request.

The second approach is to actually set up a cron job that runs at the intervals you specify. Configuring this depends on what system you use, but it typically isn't hard to do. If you use a shared host you can most likely do it right from your control panel, and if you have your own server you can use the crontab command.

Read the Configuring cron jobs using the cron command on drupal.org for more details.
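
As a hypothetical illustration (the domain and cron key below are placeholders; your site's real cron URL is shown on the Cron configuration page), a crontab entry along these lines would trigger Drupal's cron once an hour:

# Hypothetical example: run Drupal cron at the top of every hour.
0 * * * * /usr/bin/curl --silent https://www.example.com/cron/YOUR_CRON_KEY > /dev/null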

Implementing Cron tasks in Drupal

Cron tasks are defined by implementing the hook_cron hook in your module, just like in previous Drupal versions.

/**
 * Implements hook_cron().
 */
function example_cron() {
  // Do something here.
}

And that's pretty much it. Rebuild the cache, and the next time cron runs your hook will be called and executed.

There are a couple of things we have to take into consideration:

When did my Cron task run the last time?

One way to track that is by using the State API, which stores transient information. The documentation explains it as follows:

It is specific to an individual environment. You will never want to deploy it between environments.
You can reset a system, losing all state. Its configuration remains.
So, use State API to store transient information, that is okay to lose after a reset. Think: CSRF tokens, tracking when something non-critical last happened …

With that in mind, we could do something like:

$last_run = \Drupal::state()->get('example.last_run', 0);

// If 60 minutes passed since last time.
if ((REQUEST_TIME - $last_run) > 3600) {
  // Do something.

  // Update last run.
  \Drupal::state()->set('example.last_run', REQUEST_TIME);
}

This ensures our task runs at most once per hour. Note, though, that if cron itself is set to run at an interval longer than one hour, the task won't run every hour. (Who could have guessed that?) And if you use Automated Cron and there is no site activity during some hours, cron won't run during those hours either.

How time consuming is my task?

Operations like deleting rows from a database table with a timestamp as the condition are pretty lightweight and can be executed directly in the hook_cron implementation, like so:

// Example from the docs.
$expires = \Drupal::state()->get('mymodule.last_check', 0);
\Drupal::database()->delete('mymodule_table')
  ->condition('expires', $expires, '>=')
  ->execute();
\Drupal::state()->set('mymodule.last_check', REQUEST_TIME);

But if you have to run tasks that take time, such as generating PDFs, updating a lot of nodes or importing aggregated content, you should instead use something called queue workers. They let you split the work up into a queue that is processed over the course of later cron runs, which prevents a single cron run from failing due to a timeout.

QueueWorkers and Queues

So, we have a long-running task we want to process. As mentioned earlier, we shouldn't put all the processing into the hook, as that can lead to timeouts and failures. Instead we want to split the work up into a queue whose items are processed during later cron runs.

So let's pretend we've created a site where users can subscribe to things and, when they do, they get an email with an attached PDF; for the sake of the example we'll also send emails to the admins that someone subscribed. Both sending emails and generating PDFs are long-running tasks, especially when done at the same time, so let's add those items to a queue and let a queue worker process them instead.

To add a queue, we first get the queue and then add the item to it:

// Get queue.
$queue = \Drupal::queue('example_queue');

// Add some fake data.
$uid = 1;
$subscriber_id = 2;
$item = (object) ['uid' => $uid, 'subscriber_id' => $subscriber_id];

// Create item to queue.
$queue->createItem($item);

So we get a queue object by name, a name which is later used to identify which queue worker should process it, and then we add an item to it by simply calling the createItem method.

Next we'll have to create a QueueWorker plugin. The QueueWorker is responsible for processing a given queue, a set of items.

Let's define a plugin with some pseudo long running task:

modules/custom/example_queue/src/Plugin/QueueWorker/ExampleQueueWorker.php:

<?php

/**
 * @file
 * Contains \Drupal\example_queue\Plugin\QueueWorker\ExampleQueueWorker.
 */

namespace Drupal\example_queue\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Processes tasks for example module.
 *
 * @QueueWorker(
 *   id = "example_queue",
 *   title = @Translation("Example: Queue worker"),
 *   cron = {"time" = 90}
 * )
 */
class ExampleQueueWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($item) {
    $uid = $item->uid;
    $subscriber_id = $item->subscriber_id;
    $user = \Drupal\user\Entity\User::load($uid);

    // Get some email service.
    $email_service = \Drupal::service('example.email');

    // Generate PDF.
    $subscriber_service = \Drupal::service('example.subscriber_pdf');
    $pdf_attachment = $subscriber_service->buildPdf($subscriber_id, $user);

    // Do some stuff and send a mail.
    $email_service->prepareEmail($pdf_attachment);
    $email_service->send();
    $email_service->notifyAdmins($subscriber_id, $user);
  }

}

So let's break it down.

We use the Annotation to tell Drupal it's a QueueWorker plugin we created.

/**
 * Processes tasks for example module.
 *
 * @QueueWorker(
 *   id = "example_queue",
 *   title = @Translation("Example: Queue worker"),
 *   cron = {"time" = 90}
 * )
 */

The id argument is the most important since it must match the machine name of the queue we defined earlier.

The cron argument is optional and tells Drupal that, when cron runs, it should spend at most this amount of time processing the queue; for this example we used 90 seconds.

Then we implement the processItem($item) method, which receives the data we attached to each item when we created the queue.

In the pseudo example I'm loading the user from the uid we passed into the queue item and then getting two services: one that generates a PDF (a pretty heavy operation) and a second one that supposedly emails it later. We then send emails to all the admins through the notifyAdmins method. So that was pretty simple: we create a new plugin class, use the annotation to tell Drupal it's a plugin, and then implement the method which receives the data from wherever we added the item to the queue.

For this example we added an operation to the queue that doesn't necessarily belong in the cron hook, but rather belongs where the user actually subscribed to something. What I'm essentially saying is that you don't need to create a queue in a cron hook; you can do it anywhere in your code.
In practice it's the same thing: you get the queue with $queue = \Drupal::queue('example_queue'), add an item to it with $queue->createItem($data), and then define a QueueWorker which processes the queue items when cron is run.

So the question we should ask ourselves here is: should we add individual tasks to a queue and let cron process them? And the answer is: it depends. If the task slows down the request and keeps the user waiting, it's definitely something to consider. These cases may be better served by something like a background job, but you may not always be able to use one (and nothing for that comes out of the box in Drupal); if so, a cron-processed queue takes significant time off the request so it isn't too slow for the user (or doesn't time out, for that matter).

Here's all the code without the pseudo code that you can use as boilerplate:

<?php

/**
 * @file
 * Contains \Drupal\example_queue\Plugin\QueueWorker\ExampleQueueWorker.
 */

namespace Drupal\example_queue\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;

/**
 * Processes tasks for example module.
 *
 * @QueueWorker(
 *   id = "example_queue",
 *   title = @Translation("Example: Queue worker"),
 *   cron = {"time" = 90}
 * )
 */
class ExampleQueueWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($item) {
  }

}

For a real example, take a look at the Aggregator module which uses Cron and QueueWorkers.

Categories: Blogs

Ben's SEO Blog: Drupal is Better for SEO than Adobe Experience Manager

January 4, 2017 - 2:35pm
Introduction

There are many choices out there for Web Content Management Systems (WCMS). Many commercial tools tout features and say that they compare favorably to Drupal, the leading open source solution for sophisticated WCMS. In this brief, I have researched to the best of my ability the features that make Drupal stand out from Adobe Experience Manager for SEO.

I offer these caveats to my findings: 1) I have never personally used Adobe Experience Manager (AEM). That, in and of itself, is a drawback to the system. It is difficult to get a working demo of the platform; with Drupal, anyone can download and install it. 2) I have not talked to any Adobe representatives about these findings. Instead, I used Adobe’s public documentation to do my research. I also looked at third-party how-to websites and a handful of published books. I have included links to publicly available resources where available. 3) I am a Drupal SEO expert, not an Adobe Experience Manager expert, so my viewpoint is skewed. However, I am a big fan of Adobe and have been since I started using Photoshop in 1993. I believe that these findings are fair, accurate, and truly reflect some of the advantages that Drupal brings to the table and the work still ahead for AEM.

Community Responsiveness

In Drupal, you get an open source community that has proved willing to do anything to make the product, site builder, marketer, and user experiences better. A very recent example of this is the release of Google AMP (https://www.ampproject.org/). Announced in late 2015, the Accelerated Mobile Pages (AMP) Project is “an open source initiative that embodies the vision that publishers can create mobile optimized content once and have it load instantly everywhere”. It’s a way to give a much faster content experience to mobile users. While still in its infancy, it’s a very promising technology that will benefit the entire web. Google has specifically stated that it will start giving preference (e.g. higher rankings and more traffic) to sites that use AMP. (http://www.wired.com/2016/02/google-will-now-favor-pages-use-fast-loading-tech/) The Drupal community released a beta AMP feature in late Feb, 2016. This allows almost any Drupal 7 or 8 site to serve AMP-powered pages. As of May 16, after extensive online searches, I can’t find any documentation of support in Adobe Experience Manager for Google AMP. (Note that Adobe Analytics is an active partner in AMP so tracking AMP pages does exist however there are no instructions or references on how to serve AMP in any documentation that I can currently find.)

Advanced Tools for Paths

In Adobe Experience Manager, vanity URLs do not support regex patterns. (https://docs.adobe.com/docs/en/aem/6-2/deploy/configuring/resource-mappi...) This means you cannot define a redirect with a pattern like "old-blog/*" and a target like "new-blog" so that all pages under old-blog are redirected to the new-blog page.

In Drupal, this functionality has been supported since September 2013 with the Match Redirect module (https://www.drupal.org/project/match_redirect). This greatly simplifies and reduces work, as anyone who has decided to change their site structure can attest.

Migration is Easier

Another path-related example that seems to have no corollary in Adobe Experience Manager is Pathologic. (https://www.drupal.org/project/pathologic) Pathologic is an “input filter” module. This means that it runs on your content and makes some adjustment or change to it before the server pushes it out to the visitor’s browser. In this case, Pathologic fixes broken links in situations when URLs have changed. For example, if you move to a new domain name or your site structure changes, say, you moved your Drupal installation from one directory to another. Well, normally, that breaks hundreds or thousands of links, images, and references. Pathologic cleanly fixes this problem easily. Another great example is that relative links or embedded images that use relative URLs don’t work in RSS feeds. Pathologic fixes that quickly and easily.

Automatic Fixes to Rewritten URLs

Yet another path-related issue is that Adobe Experience Manager doesn’t seem to offer any type of automatic fixes if a path is changed. In Drupal, this is handled by the Redirect module. So, for example, you write a Product page on your site called “Keywords and Me” which lives at www.myurl.com/keywords-and-me. But then, after a year, you realize that to SEO that page, you need to change it to “Key Phrases and Me” and change the URL to “www.myurl.com/key-phrases-and-me”. Drupal, when properly configured, would automatically create a 301 redirect from the old URL to the new URL. This would preserve much of the value of your incoming links, provide a better user experience, and update Google that the old content lives at a new address.

Duplicate Content

In fact, the whole vanity URL system in Adobe Experience Manager has a tendency to create duplicate content. That is, content that lives on two different URLs. (See the video: https://helpx.adobe.com/experience-manager/kb/vanity-urls.html). The video demonstrates how content lives at one URL but the vanity URL also shows the exact same content. This would require, at the minimum, a canonical tag to let Google know where the content actually lives. Either Adobe expects users of their system to already know this, it was overlooked, or it’s not possible.

Preventing Common Errors

The page (https://helpx.adobe.com/experience-manager/kb/vanity-urls.html) describes a situation where the Adobe Experience Manager admin accidentally defined two different pages to resolve on a single URL “dealoftheday”. There is a complicated set of steps and a separate “Sling Resource Resolver” tool required to identify the problem and address the issue. In Drupal, this simply cannot happen as a warning would occur that allows the content creator to fix this glaring problem when the content is created. In fact, these kinds of user-friendly warnings that prevent common errors are used throughout the Drupal admin interface to help prevent the inevitable cruft that builds up in long-standing web projects.

Simplicity for Marketers

Drupal has taken great strides in making it as easy as possible for you to SEO your website. There are many tools to accomplish just about anything you can imagine, and most of them use the same familiar admin interface embedded in Drupal. There is extensive documentation as well as embedded help text.

After spending a few hours reading Adobe’s Experience Manager documentation, I’ve concluded that SEO on Adobe Experience Manager is a complex undertaking. Take a look at https://docs.adobe.com/docs/en/aem/6-2/manage/seo-and-url-management.html. An example sentence:

The SCR annotation for this type of servlet would look something like this:

@SlingServlet(resourceTypes = "myBrand/components/pages/myPageType", selectors = "myRenderer", extensions = "json", methods = "GET")

In this case, the resource that the URL addresses, an instance of the myPageType resource, is accessible in the servlet automatically. To access it, you call:

Resource myPage = req.getResource();

That's not exactly marketing-friendly.

Here’s another one:

The SlingResourceResolver can be found at /system/console/config on any AEM instance and it is recommended that you build out the mappings that are needed to shorten URLs as regular expressions and include these configurations under a config.publish OsgiConfig node that is included in your build. Rather than doing your mappings in /etc/map, they can be assigned directly to the resource.resolver.mapping property:

resource.resolver.mapping="[/content/my-brand/(.*)$</$1]"

This isn’t something that any marketer can or should have to understand. I understand that there are likely developers involved in a web project that understand this kind of tech-speak. However, the more I rely on a technician to fix my SEO issues, the less control I have and the longer (generally) that it takes to make the changes I need.

The Power of Taxonomy

Drupal has a system called Taxonomy that is an SEO’s dream. When a content creator tags content, Drupal automatically creates a page for that tag that is well optimized for search engines. That tag page is automatically added to the XML Sitemap so it shows up in Google quickly. And, when additional content is tagged with the same tag, the tag page is automatically updated. Of course tag pages can be even further optimized similarly to how node pages are optimized - with additional text and meta data. Adobe Experience Manager does support tagging but it seems to be limited to showing “tag clouds”. It does not support custom tag pages and the site admin must “Select the page to be referenced” by each tag - a time-consuming process that is not updated dynamically. (See https://docs.adobe.com/docs/en/aem/6-2/author/page-authoring/default-com... #Tag Cloud )

The Development Effort Required for Great SEO

Adobe Experience Manager requires significantly more effort in configuration than Drupal to achieve optimal SEO. This means that there is more to maintain, more things that could go wrong, and more reliance on the time and technical expertise of your development team. Examples that can be taken from https://docs.adobe.com/docs/en/aem/6-2/manage/seo-and-url-management.html include Canonical URLs, Case Sensitivity, and XML Sitemap. I’m sure there are others.

Conclusion

It’s clear that there are many distinct SEO advantages to using Drupal for your website. While this review is not conclusive, and it is almost certain that Adobe will continue to improve their product and make it a better and easier tool for marketers, as it stands today, Drupal is the clear winner. Even if Adobe does soon bring parity for SEO, the ongoing competitive advantage that Drupal has is the one addressed first: that of community. The Drupal community is many, many times larger than the Adobe development team. That means that there are, and always will be, more resources going into making Drupal the best that it can be. Drupal is backed by Acquia, which provides a level of support on par with Adobe's along with many of the advanced personalization tools that make Adobe Experience Manager so attractive. In my opinion, Drupal wins and will keep winning.

A comparison of major SEO functionality between the two platforms clearly shows Drupal wins.

Tags: drupal seo, adobe experience manager, seo, Planet Drupal
Categories: Blogs

Acquia Developer Center Blog: What is DevOps?

January 4, 2017 - 12:06pm

DevOps is a much used term, but it seems like everyone you talk to has a different definition of it. Here's my own interpretation.

Tags: acquia drupal planet
Categories: Blogs

Deeson: Thinking fast and slow in digital

January 4, 2017 - 11:36am

Successfully delivering a digital project starts by understanding how we, as humans, think. The better we get at identifying what might mislead us and learning to focus on what is important, the more chances we have for delivering a successful project. We need to cultivate our ability to think slow in order to deliver something that will enable others to act fast.

In the book “Thinking, Fast and Slow”, psychologist Daniel Kahneman provides a conceptual framework to help us better understand how we think. The basic tenet is that we have two systems that influence us and they are in constant competition.

System 1, as Kahneman calls it, is fast, instinctive and emotional. System 2 is more deliberative, rational and effortful. He goes on to describe a number of experiments that demonstrate how we often allow System 1 to lead us down the wrong path when faced with complex and unusual situations as we attempt to anchor them in what we know and relate them to patterns we’ve seen in the past.

Digital projects, it could be argued, are almost perfectly designed to set these traps for our minds and lead us down the wrong path. Here are some examples.

Framing

What on the surface can appear as a simple request such as “our organisation needs a website”, once unpacked, opens up a significant number of questions that go well beyond what people intuitively consider the domain of digital design. Framing, as psychologists call it, significantly influences our subsequent choices. If we frame the problem of creating a website in terms of “what CMS will allow us to create pages quickly” as opposed to “what reactions do we want to generate from users”, we will go down a very different (and not very useful) path.  

Substitution problem

The relatively young age of the discipline of digital design means there is an unusually high level of noise within the information available. It is easy to be misled as to the actual complexity of the task and become overly concerned with a specific aspect (which CMS, which technology, which colours, etc). Breaking through the noise to get to what is really useful challenges the very core of how we as humans are equipped to think. In particular Kahneman describes, through what he calls the substitution problem, how we are naturally inclined to substitute complex questions (e.g. “What is the most important message our organisation should get across?”) with apparently simpler, but not as useful, questions (e.g. “What colour should the page footer be in?”).

The Planning fallacy

The actual realisation of a project requires a variety of different disciplines to complete. Digital strategy, design, user experience, analytics and software engineering all come into play. Each brings its own terminology and set of norms, and you are taken on a whirlwind tour of the possibilities and called to sign off in time for the project to keep pace and stay within budget. Unfortunately, as Kahneman explains, humans are also afflicted by the planning fallacy: we tend to consistently underestimate the time required to complete a task. At the same time, optimism bias means that we overestimate the benefits.

Think slow to allow others to act fast

What are we to do then? Understanding both how we think and how people that interact with our website think is central to delivering a successful digital project. We improve our chances of success by recognizing that designing, planning and delivering the project is going to take deliberative, rational and effortful thought.

As we have seen, we have to avoid a number of pitfalls and stereotypes and challenge what we think we know in order to create something that achieves our goals. At the same time, the project we deliver needs to engage with people in a manner that is fast, instinctive and emotional. We cannot expect visitors to engage with our site using their slow and rational systems.

At Deeson we have built a project delivery process that helps us avoid or minimise the risks of these pitfalls. During our Discovery phase we hold a one-day Blueprint workshop that directly aims to address the framing issue by getting all stakeholders in a single room and guiding them through a discovery process, with the aim of defining the problem we are trying to solve.

Ongoing internal debates and unlimited training budgets for all employees mean that we are not afraid to explore the hard questions and avoid the substitution problem. Finally, empowered and multi-disciplinary teams working closely with clients through an agile methodology mean that the state of the project is kept in check and planning is focused on realistic deliverables.

In short, we need to engage our deliberative and rational side in order to produce systems that are beautiful, intuitive and emotional. Deeson’s approach is designed to achieve just that.

 

Categories: Blogs

lakshminp.com: Composer autoloading and Drupal 8

January 4, 2017 - 10:55am

Ever wondered what exists inside the vendor/ directory of your Drupal or PHP codebase? Let's dive down the rabbit hole and see.

A little bit of history

Let's digress into a little history lesson to see why things are the way they are in the PHP autoloading world.
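
As a hypothetical taste of what that directory provides (this is not code from the article, and the module path is invented): vendor/autoload.php returns Composer's ClassLoader instance, which Drupal 8's front controller pulls in early during bootstrap and which can also register extra PSR-4 namespaces at runtime:

<?php

// vendor/autoload.php returns an instance of \Composer\Autoload\ClassLoader.
$loader = require __DIR__ . '/vendor/autoload.php';

// Illustrative only: the same loader can be given additional PSR-4 prefixes.
$loader->addPsr4('Drupal\\mymodule\\', __DIR__ . '/modules/custom/mymodule/src');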

Categories: Blogs

Mediacurrent: Creating and Updating Comments with Drupal’s REST Services and Javascript

January 4, 2017 - 10:48am

I recently had a need to allow users to create a single comment on two node types, which only the author could access.  Part of the decision for limiting users to one comment per node was to keep the database size down.  We only wanted one comment to be made per node, with future comments made as edits to the original.

Categories: Blogs

Mediacurrent: Fight Bad Intranets with Killer User Personas

January 4, 2017 - 9:57am

Intranets get a bad reputation. Probably because there are so many outdated and confusing corporate intranets suffering from content sprawl.

Content sprawl happens when there isn’t a content management plan for how to remove outdated content, how to curate content, and/or a process for how to ensure new content gets added to the right spot. Not having a plan might not be so bad for the first couple months after an intranet launches, but fast forward a little bit and you can have a tangled mess of content spaghetti.

Categories: Blogs

Drupal Association News: 2017 Community Board Elections Begin 1 February

January 4, 2017 - 9:11am

Now that Drupal 8 is a year old, it is an exciting time to be on the Drupal Association Board. With Drupal always evolving, the Association must evolve with it so we can continue providing the right kind of support. And, it is the Drupal Association Board who develops the Association’s strategic direction by engaging in discussions around a number of strategic topics throughout their term. As a community member, you can be part of this important process by becoming an At-large Board Member.

We have two At-large positions on the Association Board of Directors. These positions are self-nominated and then elected by the community. Simply put, the At-large Director position is designed to ensure there is community representation on the Drupal Association Board. If you are interested in helping shape the future of the Drupal Association, we encourage you to read this post and nominate yourself between 1 February and 19 February 2017.

How do nominations and elections work?
Specifics of the election mechanics were decided through a community-based process in 2012 with participation by dozens of Drupal community members. More details can be found in the proposal that was approved by the Drupal Association Board in 2012 and adapted for use this year.

What does the Drupal Association Board do?
The Board of Directors of the Drupal Association are responsible for financial oversight and setting the strategic direction for serving the Drupal Association’s mission, which we achieve through Drupal.org and DrupalCon. Our mission is: Drupal powers the best of the Web.  The Drupal Association unites a global open source community to build and promote Drupal.

New board members will contribute to the strategic direction of the Drupal Association. Board members are advised of, but not responsible for, matters related to the day-to-day operations of the Drupal Association, including program execution, staffing, etc.

Directors are expected to contribute around five hours per month and attend three in-person meetings per year (financial assistance is available if required).

Association board members, like all board members for US-based organizations, have three legal obligations: duty of care, duty of loyalty, and duty of obedience. In addition to these legal obligations, there is a lot of practical work that the board undertakes. These generally fall under the fiduciary responsibilities and include:

  • Overseeing Financial Performance
  • Setting Strategy
  • Setting and Reviewing Legal Policies
  • Fundraising
  • Managing the Executive Director

To accomplish all this, the board comes together three times a year during two-day retreats. These usually coincide with the North American and European DrupalCons as well as one February meeting. As a board member, you should expect to spend a minimum of five hours a month on board activities.

Some of the topics that will be discussed over the next year or two are:

  • Strengthening Drupal Association’s sustainability
  • Understanding what the Project needs to move forward and determining how the Association can help meet those needs through Drupal.org and DrupalCon
  • Growing Drupal adoption through our own channels and partner channels
  • Developing the strategic direction for DrupalCon and Drupal.org
  • And more!

Please watch this video to learn more.

Who can run?
There are no restrictions on who can run, and only self-nominations are accepted.

Before self-nominating, we want candidates to understand what is expected of board members and what types of topics they will discuss during their term. That is why we now require candidates to:

What will I need to do during the elections?
During the elections, members of the Drupal community will ask questions of candidates. You can post comments on candidate profiles here on assoc.drupal.org and to the public Drupal Association group at http://groups.drupal.org/drupal-association.

In the past, we held group “meet the candidate” interviews. With 22 candidates last year, group videos didn’t allow each candidate to properly express themselves. This year, we will replace the group interview and allow candidates to create their own 3-minute video and add it to their candidate profile page. These videos must be posted by 20 February, and the Association will promote them to the community from 20 February through 4 March, 2017.

How do I run?
From 1 - 19 February, go here to nominate yourself.  If you are considering running, please read the entirety of this post, and then be prepared to complete the self-nomination form. This form will be open on 1 February, 2017 through 19 February, 2017 at midnight UTC. You'll be asked for some information about yourself and your interest in the Drupal Association Board. When the nominations close, your candidate profile will be published and available for Drupal community members to browse. Comments will be enabled, so please monitor your candidate profile so you can respond to questions from community members.

Reminder, you must review the materials listed above before completing your candidate profile:

Who can vote?
Voting is open to all individuals who have a Drupal.org account by the time nominations open and who have logged in at least once in the past year. If you meet these criteria, your account will be added to the voters list on association.drupal.org and you will have access to the voting.
To vote, you will rank candidates in order of your preference (1st, 2nd, 3rd, etc.). The results will be calculated using an "instant runoff" method. For an accessible explanation of how instant runoff vote tabulation works, see videos linked in this discussion.

Elections process
Voting will be held from 6 March, 2017 through 18 March, 2017. During this period, you can review and comment on candidate profiles on assoc.drupal.org and engage all candidates through posting to the Drupal Association group. Have questions? Please contact Drupal Association Executive Director, Megan Sanicki. Many thanks to nedjo for pioneering this process and documenting it so well!
Flickr photo: Clyde Robinson

Categories: Blogs

XIO Blog: Combining Drupal 7 and 8: a new public site in Drupal 8, maintaining a separate login site for customers or members in Drupal 7

January 4, 2017 - 7:15am
Often websites are upgraded to Drupal 8 because the time has come for a new, fresh look in order to generate business and attract new customers or members. This does not affect the separate member or customer section per se. There the emphasis tends to be on integrated tools for providing support to the users and these tools do not need to be upgraded that often.  
Categories: Blogs

Third & Grove: TPG Capital Drupal Case Study

January 4, 2017 - 4:00am
Categories: Blogs

Acquia Lightning Blog: Using the Workspace Preview System

January 3, 2017 - 5:50pm

Back in October of 2016, we launched an experimental version of Lightning's Workspace Preview System. While it is still marked as experimental, great progress has been made and we wanted to share some more details about how one might use WPS.

Categories: Blogs

DrupalCon News: How are you building Drupal websites?

January 3, 2017 - 4:25pm

Drupal's site configuration interface and contributed modules have evolved greatly over the years. We want to hear all about your experience with helpful tools and successful techniques.

Do you or your organization have recommendations for a successful site build? Are you harnessing an awesome new contrib module that everyone should hear about? Did you find a great technique for creating incredible Drupal websites through the administrative interface?  Do you like crab cakes or Baltimore beer?  Me too!  

We want to hear about it all!  

Categories: Blogs

OSTraining: How to Update Drupal 8 Sites

January 3, 2017 - 3:24pm

Throughout the life of your Drupal site, you'll have to perform updates. New features, bug fixes and security patches will be released for Drupal itself, plus modules and themes. This process is essential to maintain a healthy Drupal site.

We're going to take you through the process of updating your Drupal sites. Watch the 5 videos below, and you'll see how to update Drupal 8.

Categories: Blogs

Drupal Console: Drupal Console RC-13 is out

January 3, 2017 - 1:10pm
The latest Drupal Console release, RC-13, is out, including several changes and fixes. This is a summary of the most notable updates.
Categories: Blogs

Lullabot: HTTPS Everywhere: Quick Start With Cloudflare

January 3, 2017 - 1:00pm

This is a continuation of a series of articles about HTTPS, continuing from HTTPS Everywhere: Security is Not Just for Banks. If you own a website and understand the importance of serving sites over HTTPS, the next task is to figure out how to migrate an HTTP website to HTTPS. In this article, I’ll walk through an easy and inexpensive option for migrating your site to HTTPS, especially if you have little or no control over your website server or don't know much about managing HTTPS.

A Github Pages Site

I started with the simplest possible example: a website hosted by a free, shared hosting service, GitHub Pages, that doesn’t directly provide SSL for custom domains. I have no shell access to the server, and I just wanted to get my site switched to HTTPS as easily and inexpensively as possible. I used an example from the Cloudflare blog about how to use Cloudflare SSL for a GitHub Pages site.

Services like Cloudflare can provide HTTPS for any site, no matter where it is hosted. Cloudflare is a Content Delivery Network (CDN) that stands in front of your web site to catch traffic before it gets to your origin website server. A CDN provides caching and efficient delivery of resources, but Cloudflare also provides SSL certificates, and they have a free account option to add any domain to an existing SSL certificate for no charge. With this alternative there is no need to purchase an individual certificate, nor figure out how to get it uploaded and signed. Everything is managed by Cloudflare. The downside of this option is that the certificate will be shared with numerous other unrelated domains. Cloudflare has higher tier accounts that have more options for the SSL certificates, if that’s important. But the free option is an easy and inexpensive way to get basic HTTPS on any site.

It’s important to note that adding another server to your architecture means that content makes another hop between servers. Now, instead of content going directly from your origin website server to the user, it goes from the origin website server to Cloudflare to the user. The default Cloudflare SSL configuration will encrypt traffic between end users and the Cloudflare server (front-end traffic), but not between Cloudflare and your origin website server (back-end traffic). They point out in their documentation that back-end traffic is much harder to intercept, so that might be an acceptable risk for some sites. But for true security you want back-end traffic encrypted as well. If your origin website server has any kind of SSL certificate on it, even a self-signed certificate, and is configured to manage HTTPS traffic, Cloudflare can encrypt the back-end traffic as well with a “Full SSL” option. If the web server has an SSL certificate that is valid for your specific domain, Cloudflare can provide even better security with the “Full SSL (strict)” option. Cloudflare can also provide you with an SSL certificate that you can manually add to your origin server to support Full SSL, if you need that.

The following screenshot illustrates the Cloudflare security options.

Step 1. Add a new site to Cloudflare

I went to Cloudflare, clicked the button to add a site, typed in the domain name, and waited for Cloudflare to scan for the DNS information (that took a few minutes). Eventually a green button appeared that said ‘Continue Setup’.

Step 2. Review DNS records

Next, Cloudflare displayed all the existing DNS records for my domain.

Network Solutions is my registrar (the place where I bought and manage my domain). Network Solutions was also my DNS provider (nameserver) where I set up the DNS records that indicate which IP addresses and aliases to use for my domain. Network Solutions will continue to be my registrar, but this switch will make Cloudflare my DNS provider, and I’ll manage my DNS records on Cloudflare after this change.

I opened up the domain management screen on Network Solutions and confirmed that the DNS information Cloudflare had discovered was a match for the information in my original DNS management screen. I will be able to add and delete DNS records in Cloudflare from this point forward, but for purposes of making the switch to Cloudflare I initially left everything alone.

Step 3. Move the DNS to Cloudflare

Next, Cloudflare prompted me to choose a plan for this site. I chose the free plan option. I can change that later if I need to. Then I got a screen telling me to switch nameservers in my original DNS provider.


On my registrar, Network Solutions, I had to go through a couple of screens, opting to Change where domain points, then Domain Name Server, point domain to another hosting provider. That finally got me to a screen where I could input the new nameservers for my domain name.


Back on Cloudflare, I saw a screen like the following, telling me that the change was in progress. There was nothing to do for a while; I just needed to allow the change to propagate across the internet. The Cloudflare documentation assured me that the change should be seamless to end users, and that seemed logical since nothing had really changed so far except the switch in nameservers.


Several hours later, once the status changed from Pending to Active, I was able to continue the setup. I was ready to configure the SSL security level. There were three possible levels. The Flexible level was the default. That encrypts traffic between my users and Cloudflare, but not between Cloudflare and my site’s server. Further security is only possible if there is an SSL certificate on the origin web site server as well as on Cloudflare. GitHub Pages has an SSL certificate on the server, since they provide HTTPS for non-custom domains. I selected the Crypto tab in Cloudflare to choose the SSL security level I wanted and changed the security level to Full.

Step 4. Confirm that HTTPS is Working Correctly

What I had accomplished at this point was to make it possible to access my site using HTTPS with the original HTTP addresses still working as before.

Next, it was time to check that HTTPS was working correctly. I visited the production site, and manually changed the address in my browser from HTTP://www.example.com to HTTPS://www.example.com. I checked the following things:

  • I confirmed there was a green lock displayed by the browser.
  • I clicked the green lock to view the security certificate details (see my previous article for a screenshot of what the certificate looks like), and confirmed it was displaying a security certificate from Cloudflare, and that it included my site’s name in its list of domains.
  • I checked the JavaScript console to be sure no mixed content errors were showing up. Mixed content occurs when you are still linking to HTTP resources on an HTTPS page, since that invalidates your security. I’ll discuss in more detail how to review a site for mixed content and other validation errors in the next article in this series.
Step 5. Set up Automatic Redirection to HTTPS

Once I was sure the HTTPS version of my site was working correctly, I could set up Cloudflare to handle automatic redirection to HTTPS, so my end users would automatically go to HTTPS instead of HTTP.

Cloudflare controls this with something it calls “Page Rules,” which are basically the kinds of logic you might ordinarily add to an .htaccess file. I selected the “Page Rules” tab and created a page rule that any HTTP address for this domain should always be switched to HTTPS.


Since I also want to standardize on www.example.com instead of example.com, I added another page rule to redirect traffic from HTTPS://example.com to HTTPS://www.example.com using a 301 redirect.


Finally, I tested the site again to be sure that any attempt to access HTTP redirected to HTTPS, and that attempts to access the bare domain redirected to the www sub-domain.

A Drupal Site Hosted on Pantheon

I also have several Drupal sites that are hosted on Pantheon and wanted to switch them to HTTPS, as well. Pantheon has instructions for installing individual SSL certificates for Professional accounts and above, but they also suggest an option of using the free Cloudflare account for any Pantheon account, including Personal accounts. Since most of my Pantheon accounts are small Personal accounts, I decided to set them up on Cloudflare as well.

The setup on Cloudflare for my Pantheon sites was basically the same as the setup for my Github Pages site. The only real difference was that the Pantheon documentation noted that I could make changes to settings.php that would do the same things that were addressed by Cloudflare’s page rules. Changes made in the Drupal settings.php file would work not just for traffic that hits Cloudflare, but also for traffic that happens to hit the origin server directly. Pantheon’s documentation notes that you don’t need to provide both Cloudflare page rules and Drupal settings.php configuration for redirects. You probably want to settle on one or the other to reduce future confusion. However, either, or both, will work.

These settings.php changes might also be adapted for Drupal sites not hosted on Pantheon, so I am copying them below.

// From https://pantheon.io/docs/guides/cloudflare-enable-https/#drupal
// Set the $base_url parameter to HTTPS:
if (defined('PANTHEON_ENVIRONMENT')) {
  if (PANTHEON_ENVIRONMENT == 'live') {
    $domain = 'www.example.com';
  }
  else {
    // Fallback value for development environments.
    $domain = $_SERVER['HTTP_HOST'];
  }
  # This global variable determines the base for all URLs in Drupal.
  $base_url = 'https://'. $domain;
}

// From https://pantheon.io/docs/redirects/#require-https-and-standardize-domain
// Redirect all traffic to HTTPS and WWW on live site:
if (isset($_SERVER['PANTHEON_ENVIRONMENT']) &&
  ($_SERVER['PANTHEON_ENVIRONMENT'] === 'live') &&
  (php_sapi_name() != "cli")) {
  if ($_SERVER['HTTP_HOST'] != 'www.example.com' ||
    !isset($_SERVER['HTTP_X_SSL']) ||
    $_SERVER['HTTP_X_SSL'] != 'ON') {
    header('HTTP/1.0 301 Moved Permanently');
    header('Location: https://www.example.com'. $_SERVER['REQUEST_URI']);
    exit();
  }
}

There was one final change I needed to make to my Pantheon sites that may or may not be necessary for other situations. My existing sites were configured with A records for the bare domain. That configuration uses Pantheon’s internal system for redirecting traffic from the bare domain to the www domain. But that redirection won’t work under SSL. Ordinarily you can’t use a CNAME record for the bare domain, but Cloudflare uses CNAME flattening to support a CNAME record for the bare domain. So once I switched DNS management to Cloudflare’s DNS service, I went to the DNS tab, deleted the original A record for the bare domain and replaced it with a CNAME record, then confirmed that the HTTPS bare domain properly redirected to the HTTPS www sub-domain.

Next, A Deep Dive

Now that I have basic SSL working on a few sites, it’s time to dig in and try to get a better understanding about HTTPS/SSL terminology and options and see what else I can do to secure my web sites. I’ll address that in my next article, HTTPS Everywhere: Deep Dive Into Making the Switch.

Categories: Blogs

Third & Grove: Mint.com Drupal Case Study

January 3, 2017 - 12:15pm
Categories: Blogs
