Drupal Planet

Drupal.org - aggregated feeds in category Planet Drupal

OSTraining: Using the Drupal Theme Developer Module

June 8, 2017 - 7:49am

There is one module that makes designing for Drupal 7 much, much easier: Theme Developer.

You can think of Theme Developer as a Drupal-specific version of Firebug or Chrome Developer Tools. Using Theme Developer, you can click on any element of your Drupal site and get a breakdown of how it was built.

Theme Developer has some downsides: it hasn't been updated in a while, and (like anything related to the Devel module) it shouldn't be used on live sites. But it can still be a useful tool for Drupal 7 themers.

  • In the bottom-left corner of the screen, you will see a small "Themer Info" area.

  • Check the "Themer Info" box.
  • Up in the top-right corner of the site, you'll see a larger black box.

  • The bar does a pretty good job of explaining what to do! Just like Firebug or Chrome Dev Tools, you can inspect areas of your Drupal site.
  • Here's what happens when you click on a page element: you'll see a red box around that particular element.
  • The Theme Developer box will now show information about your chosen page element.

Here are some of the details you'll see:

  • Template called: the name of the file that controls the layout of this element
  • File used: the location of the file controlling the layout
  • Candidate template files: if you'd like to create an override for this part of the page, these are the suggested file names.
  • Preprocess functions: the functions that connect what happens in the module code to what gets sent to the theme (see the sketch below)
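
To make that last point concrete, here is a minimal, hypothetical sketch of a preprocess function in a Drupal 7 theme's template.php — MYTHEME stands in for your theme's machine name:

// In MYTHEME/template.php. MYTHEME is a placeholder for your theme's
// machine name.
function MYTHEME_preprocess_block(&$variables) {
  // Preprocess functions run before block.tpl.php (or an override of it)
  // renders, so anything added to $variables here is available in the
  // template file.
  $variables['classes_array'][] = 'mytheme-block';
}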

If you want to use the candidate template files, the easiest thing to do is to copy the "Template called" file, rename it, and save it in your template folder. Here is what the files mentioned in this example would do:

  • block-user-1.tpl.php ... if you create this file, it will only provide a template for this particular block
  • block-user.tpl.php ... if you create this file, it will provide a template for all blocks from the user module
  • block-left.tpl.php ... if you create this file, it will provide a template for blocks in the left div
  • block.tpl.php ... if you create this file, it will provide a template for all blocks

This video offers a great explanation of the Theme Developer module.


Agaric Collective: Catch some Agarics at June conferences

June 7, 2017 - 9:32pm

Agaric is grateful to the Drupal community for all the effort poured into the amazing collaborative project. As part of giving back to it, we go to conferences to share with others what we have learned. These are some events where Agaric will be presenting this month.

Eastern Conference for Workplace Democracy

This is a convergence of worker-owned cooperatives. Representatives come from all over the country to attend workshops and sessions on all things related to owning a cooperative. It will be held in New York City the weekend of June 9th-11th at the John Jay College of Criminal Justice.

Benjamin and Micky will be hosting a workshop/discussion with Danny Spitzberg on Drutopia. They will cover how it can help cooperatives and smaller businesses have a web presence beyond what they could otherwise afford, by consolidating hosting and feature development into a group effort.

Montreal Drupal Camp

This event will take place June 15-18 at the John Molson School of Business at Concordia University. Benjamin will be speaking on how Software as a Service can lead to long-term success in a software project.

Twin Cities Drupal Camp in Minneapolis

At Twin Cities, Agaric will be presenting one workshop and two sessions.

On Thursday, June 22, Benjamin and Mauricio will present the Getting Started with Drupal workshop. It is aimed at people who are just starting with Drupal and want a bird's-eye view of how the system works. As part of the workshop, attendees will have the chance to create a simple yet functional website to put their new knowledge into practice. The organizers have gone above and beyond to make this training FREE for everyone! You do not even need a camp ticket to participate. You just need to register.

On Saturday, June 24, Mauricio will present on Drupal 8 Twig recipes. This will be an overview of the theme system in Drupal 8 and will include practical examples of modifying the default markup to your needs. The same day, Benjamin will present his Software as a Service session.

Design4Drupal

This is THE yearly camp for Drupal doers in Boston, and it happens June 22nd-23rd. Micky will be hosting a workshop/discussion on Drutopia, an initiative within the Drupal project based on social justice values and focused on building collectively owned online tools. Current focuses include two Drupal distributions aimed at grassroots groups, also offered as software as a service, ensuring that the latest technology is accessible to low-resourced communities.

Agaric will have a busy month attending and speaking at conferences. Please come say hi and have fun with us.


Freelock: Fixing Drupal 8.3 upgrade issues - TwigExtension, Layouts, and Tweaks

June 7, 2017 - 9:00pm

Lots of stuff has been changing in Drupal 8 recently. In 8.3.0, a new experimental "layout discovery" module was added to core, which conflicted with the contrib "layout plugin" module. Now in 8.3.3, the two-column and three-column layouts had their region names changed, which hid any content dropped into those regions when those layouts were used.

In the past week, we've seen a couple of issues upgrading a site from 8.2.x to 8.3.2, and now another issue going from 8.3.2 to 8.3.3, that seem worth a quick mention.

Tags: Drupal, Drupal 8, Drupal Planet, Updates, Visual Regression Testing

Lullabot: The Ten Commandments of a New Drupal 8 Site for Enterprise Developers

June 7, 2017 - 5:50pm

Over the past two years, I’ve had the opportunity to work with many different clients on their Drupal 8 site builds. Each of these clients had a large development team with significant amounts of custom code. After a recent launch, I went back and pulled together the common recommendations we made. Here they are!

1. Try to use fewer repositories and projects

With the advent of Composer for Drupal site building, it feels natural to have many small, individual repositories for each custom module and theme. It has the advantage of feeling familiar to the contrib workflow for Drupal modules, but there are significant costs to this model that only become obvious as code complexity grows.

The first cost is that, at best, every bit of work requires two pull requests: one in the custom module repository, and a second updating composer.lock in the site repository. It's easy to forget about that second pull request, and in our case, it led to constant questioning by the QA team about whether a given ticket was ready to test.

A second cost is dealing with cross-repository dependencies. For example, in site implementations, it’s really common to do some work in a custom module and then to theme that work in a custom theme. Even if there’s only a master branch, there would still be three pull requests for this work—and they all have to be merged in the right order. With a single repository, you have a choice. A single pull request can be reviewed and merged, or multiple can be filed.

A third, and truly insidious cost is where separate repositories actually become co-dependent, and no one knows it. This can happen when modules are only tested in the context of a single site and database, and not as site-independent reusable modules. Is your QA team testing each project against a stock Drupal install as well as within your site? Are they mixing and matching different tags from each repository when testing? If not, it’s better to just have a single site repository.

2. Start with fewer branches, and add more as needed

Sometimes, it feels good to start a new project by creating all of the environment-specific branches you know you’ll need; develop, qa, staging, master, and so on. It’s important to ask yourself; is each branch being used? Do we have environments for all of these branches? If not, it’s totally OK to start with a single master branch. If you do have multiple git repositories, ask this question for each repository independently. Perhaps your site repository has several branches, while the new SSO module that you’re building for multiple sites sticks with just a master branch. Branches should have meaning. If they don’t, then they just confuse developers, QA, and stakeholders, leading to deployment mistakes. Delete them.

3. Avoid parallel projects

Once you do have multiple branches, it's really important to ensure that branches are eventually merged "upstream." With Composer, it's possible to have different composer.json files in each branch, such as qa pointing to the develop branch in each custom module, and staging pointing to master. This causes all sorts of confusion because it effectively means you have two different software products—what QA and developers use, and what site users see. It also means that changes in the project scaffolding have to be done once in each branch. If you forget to do that, it's nothing but pain trying to figure it out! Instead, use environment branches to represent the state of another branch at a given time, and then tag those branches for production releases. That way, you know that tag 1.3.2 is identical to some build on your develop branch (even if the hash isn't identical due to merge commits).

4. Treat merge conflicts as an opportunity

I’ve heard from multiple developers that the real reason for individual repositories for custom modules is to “reduce merge conflicts.” Let’s think about the effect multiple repositories have on a typical Drupal site.

I like to think about merge conflicts in three types. First, there's the traditional merge conflict, such as when git refuses to merge a branch automatically. Two lines of code have been changed independently, and a developer needs to resolve them. Second, there are logical merge conflicts. These don't cause a merge conflict that version control can detect but do represent a conflict in code. For example, two developers might add the same method name to a class but in different text locations in the class. Git will happily merge these together, but the result is invalid PHP code. Finally, there are functional merge conflicts. This is where the PHP code is valid, but there is a regression or unexpected behavior in related code.

Split repositories don’t have much of an effect on traditional merge conflicts. I’ve found that split repositories make logical conflicts a little harder to manage. Typically, this happens when a base class or array is modified and the developer misses all of the places to update code. However, split repositories make functional conflicts drastically more difficult to handle. Since developers are working in individual repositories, they may not always realize that they are working at cross-purposes. And, when there are dependencies between projects, it requires careful merging to make sure everything is merged in the right order.

If developers are working in the same repository, and discover a merge conflict, it’s not a blocker. It’s a chance to make a friend! By discussing the conflict, it gives developers the chance to make sure they are solving the right problem, the right way. If conflicts are really complex, it’s an opportunity to either refactor the code or to raise the issue to the rest of the team. There’s nothing more exciting than realizing that a merge conflict revealed conflicting requirements.

5. Set up config management early

I’ve seen several Drupal 8 teams delay in setting up a deployment workflow that integrates with Drupal 8’s configuration management. Instead, deployments involve pushing code and manual UI work, clicking changes together. Then, developers pull down the production database to keep up to date.

Unfortunately, manual configuration is prone to error. All it takes is one mistake, and valuable QA time is wasted. Also, it avoids code review of configuration, which is actually possible and enjoyable with Drupal 8’s YAML configuration exports.

The nice thing about configuration management tooling is it typically doesn’t have any dependency on your actual site requirements. This includes:

  • Making sure each environment pulls in updated configs on deployment
  • Aborting deployments and rolling back if config imports fail
  • Getting the development team comfortable with config basics
  • Setting up the secure use of API keys through environment variables and settings.php (see the sketch below).

Doing these things early will pay off tenfold during development.
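
To illustrate that last bullet, here is a minimal settings.php sketch for reading an API key from an environment variable — the config name mymodule.settings and the MYMODULE_API_KEY variable are hypothetical placeholders:

// In settings.php (or settings.local.php). This overrides the config value
// at runtime, so the secret stays out of exported YAML and version control.
// 'mymodule.settings' and MYMODULE_API_KEY are placeholder names.
$config['mymodule.settings']['api_key'] = getenv('MYMODULE_API_KEY');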

6. Secure sites early

I recently worked on a site that was only a few weeks away from the production launch. The work was far enough along that the site was available outside of the corporate VPN under a “beta” subdomain. Much to my surprise, the site wasn’t under HTTPS at all. As well, the Drupal admin password was the name of the site!

These weren’t things that the team had forgotten about; but, in the rush of the last few sprints, it was clear the two issues weren’t going to be fixed until a few days before launch. HTTPS setup, in particular, is a great example of an early setup task. Even if you aren’t on your production infrastructure, set up SSL certificates anyway. Treat any new environments without SSL as a launch blocker. Consider using Let's Encrypt if getting proper certificates is a long task.

This phase is also a good chance to make sure admin and editorial accounts are secure. We recommend that the admin account password is set to a long random string—and then, don’t save or record the password. This eliminates password sharing and encourages editors to use their own separate accounts. Site admins and ops can instead use ssh and drush user-login to generate one-time login links as needed.

7. Make downsyncs normal

Copying databases and file systems between environments can be a real pain, especially if your organization uses a custom Docker-based infrastructure. rsync doesn’t work well (because most Docker containers don’t run ssh), and there may be additional networking restrictions that block the usual sql-sync commands.

This leads many dev teams to really hold off on pulling down content to lower environments because it’s such a pain to do. This workflow really throws QA and developers for a loop, because they aren’t testing and working against what production actually is. Even if it has to be entirely custom, it’s worth automating these steps for your environments. Ideally, it should be a one-button click to copy the database and files from one environment to a lower environment. Doing this early will improve your sprint velocity and give your team the confidence they need in the final weeks before launch.

8. Validate deployments

When deploying new code to an environment, it’s important to fail builds if something goes wrong. In a typical Drupal site, you could have errors during:

  • composer install
  • drush updatedb
  • drush config-import
  • The deployment could work, but the site could be broken and returning HTTP 500 error codes

Each deployment should capture the deployment logs and store them. If any step fails, subsequent steps should be aborted, and the site rolled back to its previous state. Speaking of…

9. Automate backups and reverts

When a deployment fails, it should be nearly automatic to revert the site to the pre-deployment state. Since Drupal updates involve touching the database and the file system, those should both be reverted. Database restores tend to be fairly straightforward, though filesystem restores can be more complex if they are stored on S3 or some other service. If you're hosted on AWS or a similar platform, use their APIs and utilities to manage backups and restores where possible. They have internal access to their systems, making backups much more efficient. As a side benefit, this helps make downsyncs more robust, as they can be treated as a restore of a production backup instead of a direct copy.

10. Remember #cache

Ok, I suppose I mean "remember caching everywhere," though in D8 it seems like render cache dependencies are what's most commonly forgotten. It's so easy to fall into Drupal 7 patterns and just create render arrays as we always have. After all, on locals, everything works fine! But forgetting to use addCacheableDependency() on render arrays leads to confusing bugs down the line.
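
As a refresher, here is a minimal sketch of attaching those dependencies — $node and $config are hypothetical stand-ins for whatever entity and configuration the output really depends on:

use Drupal\Core\Cache\CacheableMetadata;

// Hypothetical build: the rendered output depends on a node and a config
// object.
$build = [
  '#markup' => $node->label(),
];

// Record the dependencies so the render cache is invalidated whenever the
// node or the configuration is saved.
CacheableMetadata::createFromRenderArray($build)
  ->addCacheableDependency($node)
  ->addCacheableDependency($config)
  ->applyTo($build);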

Along the same lines, it’s important to set up invalidation caching early in the infrastructure process. Otherwise, odds are you’ll get to the production launch and be forced to rely on TTL caches simply because the site wasn’t built or tested for invalidation caching. It’s a good practice when setting up a reverse proxy to let Drupal maintain the caching rules, instead of creating them in the proxy itself. In other words, respect Cache-Control and friends from upstream systems, and only override them in very specific cases.

Finally, be sure to test on locals with caches enabled. Sure, disable them while writing code, but afterwards turn them back on and check again. I find incognito or private browsing windows invaluable here, as they let you test as an anonymous user at the same time as being logged in. For example, did you just add a config form that changes how the page is displayed? Flip a setting, reload the page as anonymous, and make sure the update is instant. If you have to do a drush cache-rebuild for it to work, you know you've forgotten #cache somewhere.

What commandments did I miss in this list? Post below and let me know!

Header image from Control room of a power plant.

Categories: Blogs

myDropWizard.com: WIEGO: 6 years and 22,000 articles - a Drupal Non-Profit Case Study!

June 7, 2017 - 5:14pm

As part of our series discussing the use of Drupal in non-profits (click here to subscribe via e-mail), we recently reached out to one of our favorite clients, WIEGO, who candidly shared some of their struggles and successes.

Since re-launching their site on Drupal almost 6 years ago, they've grown from a site with 50 static pages, to a searchable, categorized repository of news and knowledge spanning over 22,000 articles!

In this case study, we gain some insights into how organizations like WIEGO decided on Drupal, have lived with some of the growing pains, and are planning to move forward into the future!

Read more to find out!


Palantir: Starwood Retail Partners

June 7, 2017 - 1:56pm
Providing Flexibility for Future Growth

A robust style guide translated into a flexible Drupal 8 interface.

Highlights
  • Easy-to-use Drupal 8 interface

  • Robust style guide

  • Collaborative partnership with Petrick Design

Our Client

Starwood Retail Partners owns 28 shopping malls and lifestyle centers across the United States. Unlike their competitors, Starwood Retail focuses on developing community centers instead of just shopping destinations. Their corporate website targets retailers and investors who are interested in leasing or developing their properties. The site provides information on the individual locations, as well as downloadable resources for potential investors.

Being on a tight timeline to launch the new site before an upcoming corporate event, Starwood Retail sought to replace their standard development partner. They had already contracted with another firm, Petrick Design, to provide creative support, but they needed a strategic development partner. After conversations with the Palantir team, Starwood Retail knew they had found the Drupal prowess they were looking for.

"We could tell the expertise we were getting in Drupal, and that we were going to have the necessary support to get all of the things we didn’t know we needed.” — Brian Price, Digital Marketing Manager

Goals and Direction

Starwood Retail felt that their site was lagging behind their competitors, and they wanted to do a full redesign in a way that would allow them to provide thorough information with an updated look. They wanted their website to inspire, engage, and “wow” visitors while advancing the company brand and culture – innovative, creative, fresh, and young but with tremendous experience. The site also needed to intuitively expedite the leasing process, showcase their centers as prime opportunities, reinforce their retail expertise, instill pride in current employees, and inspire potential employees and partners.

Simply put, none of the content on the old Starwood Retail site described what services they provided. It had information scattered across different pages in a way that made the information feel overwhelming, and the content was not organized at a property level. This made it extremely difficult to find location-specific information because all of their content was shown in massive lists.

The new site needed to achieve three primary goals:

  • Surface content to make it easier for marketers, future tenants and investors to find what they needed.
  • Tell a story about the services Starwood Retail provides.
  • Modernize the site by migrating to Drupal 8.
How We Helped

Living Style Guide

Starwood Retail is a rapidly growing company, and they needed a site that had the flexibility to grow with them. We took the beautiful static designs provided by Petrick and extended them into a responsive style guide that informed the Drupal build. This robust browser-based style guide turned the design into components, so that new content can be published quickly as Starwood Retail continues to grow. This style guide now serves as a reference so that any future updates will still maintain the design system.

Flexibility in Drupal 8

After the style guide was created, it was translated into an easy-to-use Drupal interface. As we were building, we were able to show the Starwood Retail team how all of the components would come together, and we worked with them on help text, labels, and an organization that made sense to them. Because they were involved in the process, it is easier for them to carry the site forward.

The new site is easier to understand and easier to populate. On the previous site, contact information had to be updated in every place it appeared. On the new site, updating a piece of content updates that node across the whole site. Another example: when a new mall page is added, that mall is automatically added to their location map (shown below).

Property map with advanced filtering abilities.

The Results

The new Drupal 8 site has intuitive workflows and allows the editorial team to be more efficient as Starwood Retail grows. Not only does it have a modern look and feel, it’s easy to update. Editors know exactly where they need to go, because the site functions as they intended it to.

This project is a prime example of how a collaborative process can turn out well. Through constant communication and clearly identified trade-offs, even a very tight deadline was achieved.

“We have a great corporate website now that everyone is really proud of, and it functions exactly how we wanted it to.” - Brian Price, Digital Marketing Manager

Tags: Drupal 8, Services, development, starwoodretail.com

Palantir: GenomeWeb

June 7, 2017 - 1:29pm
Increasing Engagement Using Segmented Content

Using Domain Access to manage content between multiple sites.

Highlights
  • Multi-headed Drupal architecture

  • Audience segmentation using domain-specific registrations

  • Efficient editorial and user management workflows

Our Client

GenomeWeb is an independent news organization that provides online reporting on genomic technologies. Historically they have focused on this very narrow niche of the bio industry, and they are the leading news site in that particular field. Their site has an active community with over 200,000 users and about 20 new articles being published daily.

Over time GenomeWeb saw that the technologies they were covering were moving very quickly into healthcare and diagnostics, and they wanted to expand their news coverage into the molecular diagnostics space.

Goals and Direction

Instead of adding new content directly to the existing site, GenomeWeb wanted to create a new sister site to be located at www.360Dx.com, which would include existing diagnostic content and also new coverage that could be marketed to a broader diagnostics audience. The new site would host less technical and more business-focused content, as well as share content with the current GenomeWeb site.

Goals for the new 360Dx site and multi-headed architecture:

  • Content from each site should be easily accessible for both sets of audiences.
  • New clinical content should only live on 360Dx.
  • Sites should keep the same user database. If someone is a user on GenomeWeb, they should have the same level of access on the new 360 site. This means paying for a premium level of access on one site would grant users premium access on the other.

“It was a very complex project. The site was already complicated to begin with.” — Bernadette Toner, CEO

How We Helped

To extend their business model to another site, Palantir used the Domain module suite to enable editors to assign content to both genomeweb.com and 360Dx.com. With Domain, the two sites can share some content and cross-promote articles to new audiences while having unique themes and settings.

The team developed a new derivative theme for 360Dx.com and ensured that content, users, and views were assigned to the proper domain. This work included analysis of existing modules and content, the creation and testing of update scripts, and configuration of domain-specific settings for analytics, ads, and other services. We also worked with the GenomeWeb team to integrate domains into their memberships, so that users could subscribe to email news bulletins from either or both sites independently.

The new site structure we created had very intuitive workflows, which meant the GenomeWeb team did not need extensive training to learn the new functionality. We eased deployment and updates using the Features module and by documenting the domain configurations.

The Results

The new multi-headed Drupal architecture created multiple wins for GenomeWeb. There is a wealth of content between their two sites, and by using Domain Access they are able to easily manage it all in one place. It has been easy for editors to post content and decide if it should go to one site or both, and there hasn’t been a huge change in their daily workflow.

The new architecture also allows GenomeWeb to engage with their audience on a deeper level: by having different kinds of registrations for each site, GenomeWeb is able to collect different demographics and target specific segments of their audience with more data. Although the site is still new, GenomeWeb has met their initial projections, and they anticipate being able to personalize their efforts even more as more data accumulates.

“The new site works as we envisioned, which doesn’t always happen. The Palantir team listened to what we needed and was able to make it happen, and we are really, really happy with the results.” — Bernadette Toner, CEO


Tags: Drupal 8, Services, development, genomeweb.com, 360Dx.com

Jacob Rockowitz: Crowdfunding does not help grow Drupal's community

June 7, 2017 - 1:07pm

First off, I want to emphasize that the blog post below reflects my opinion and personal feelings as someone who has spent the past year building the Webform 8.x-5.x module for the Drupal community. Now I want to see it continue to grow and flourish. Many thought leaders, including Dries, have contemplated and publicly discussed the concept and benefits of crowdfunding, and this approach has been used to fund Drupal 8 core and module development.

Drupal 8 was, and still is, a monolithic accomplishment - one that continues to be an ambitious undertaking to maintain and improve. The Drupal community might still be waiting for Drupal 8 to be released if organizations did not crowdfund and accelerate D8. It is our togetherness, our pooling of our resources, that allows us to accomplish great things, like Drupal. At the same time, the Drupal community is made up of a network of relationships and collaborations. Drupal and Open-source’s success depends on its collaborative community, which is driven by relationships. Crowdfunding solves a big problem, pooling resources to fund open source, but it does not build relationships. Drupal's strength lies in its community, bonded together by healthy and productive relationships.

I feel that crowdfunding, especially within the Drupal contributed project space, is just handing out fish without teaching project maintainers how to fish, or even teaching companies how to properly hand out fish. Crowdfunding Drupal projects does not build relationships between project maintainers and organizations/companies. The most obvious issue is that crowdfunding typically has a limited number of fish. Meanwhile, dozens of companies are throwing fish, aka money, into a pool that is drained by project maintainers who don't even know the origin of any particular fish. Finally, the most... Read More


Bluespark Labs: When Drupal Met CARTO

June 7, 2017 - 11:50am

Drupal 8 is a powerful and customizable CMS.

It provides a lot of different tools to add, store, and visualize data; however, spatial data visualization is a sophisticated and complicated topic, and Drupal hasn't always been the best option for handling it.

Because of its complexity, spatial data requires a specific process to become visual. We often think of a map with some pins or location points, but there are much more complex edge cases where Drupal alone cannot meet end users' needs, such as rendering thousands of points or complex geometries on a map, or trying to create heatmaps based on stored data.

That's why it's important to acknowledge that Drupal is not a golden hammer, and that using third-party services can help us provide a much better and more appropriate user experience. This is where we introduce CARTO.

CARTO is a powerful spatial data analysis platform that provides different services related to the geographical information stored in a spatial database in the cloud. The fact that the base of all this process is a database table makes the connection between Drupal and CARTO simple and fairly straightforward.

From our point of view, two of the most useful tools provided by CARTO to be integrated within Drupal are the Import API and Builder. (There are some other ones that are interesting for more advanced users).

  • Import API allows you to upload files to a CARTO account, check on their current upload status, and delete and list import processes on a given account (see the sketch after this list).
  • CARTO Builder is a web-based drag-and-drop analysis tool for analysts and business users to discover and predict key insights from location data. Maps generated with this tool can be shared or embedded in any website.
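
As a rough illustration of the Import API from Drupal 8, here is a hedged sketch using Drupal's built-in Guzzle client. The account name, API key, and file path are placeholders, and the endpoint details should be verified against CARTO's Import API documentation:

// Hypothetical example: push a CSV dataset to CARTO's Import API.
// 'myaccount', YOUR_API_KEY, and the file path are all placeholders.
$client = \Drupal::httpClient();
$response = $client->post('https://myaccount.carto.com/api/v1/imports', [
  'query' => ['api_key' => 'YOUR_API_KEY'],
  'multipart' => [
    [
      'name' => 'file',
      'contents' => fopen('/tmp/locations.csv', 'r'),
      'filename' => 'locations.csv',
    ],
  ],
]);

// The response should include an item_queue_id that can be polled for the
// status of the import.
$data = json_decode((string) $response->getBody(), TRUE);
$queue_id = isset($data['item_queue_id']) ? $data['item_queue_id'] : NULL;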

So, at this point we have two systems — Drupal and CARTO — with the following features:

  • Drupal, a very capable tool to create, store, and establish relationships between content
  • CARTO, a powerful platform able to import spatial data, process it and generate amazing performance maps that can be shared
  • Drupal Media, an ecosystem that allows embedding and integrating external resources as entities

The remaining problem is how to generate powerful maps and include them in a Drupal site.

First, the data stored in Drupal has to be pushed to CARTO. Then the maps are generated in CARTO before being embedded in Drupal. 

This can now easily be done using CARTO Sync and Media entity CARTO, both Drupal modules.

  • CARTO Sync allows the results of a Drupal View to be pushed to CARTO for processing
  • Media Entity CARTO integrates CARTO Builder shared maps within the Media ecosystem and lets you create Map entities that can be referenced or embedded in any Drupal content

Following this method, we can still use Drupal as the CMS, while taking advantage of all the features that CARTO provides in order to represent accurate spatial information.

If you find this topic interesting, please take a look at the slides or recording from the presentation at DrupalCamp Madrid 2017.

Tags: Drupal Planet, CARTO, Drupal 8, spatial data, mapping

Flocon de toile | Freelance Drupal: Create a mega menu with Drupal 8

June 7, 2017 - 7:00am

Creating a responsive mega menu is a regular requirement on many projects, Drupal 8 or otherwise. And while we can find some solutions that offer to create mega menus easily, very often these solutions remain quite rigid and can hardly be adapted to the requirements of a project. But what is a mega menu? It is nothing more than a menu that contains a little more than a simple list of links (as provided by Drupal 8's menu system): specific links, text, images, calls to action, etc.


Sudhanshu Gautam | Blog: GSoC 2017 | Week 1: Port Vote Up/Down

June 7, 2017 - 3:11am

OhTheHugeManatee: Stop Waiting for Feeds Module: How to Import RSS in Drupal 8

June 7, 2017 - 1:33am

How do you import an RSS feed into entities with Drupal 8? In Drupal 6 and 7, you probably used the Feeds module. Feeds 7 made it easy (-ish) to click together a configuration that matches an RSS (or any XML, or CSV, or OPML, etc) source to a Drupal entity type, maps source data into Drupal fields, and runs an import with the site Cron. Where has that functionality gone in D8? I recently had to build a podcast mirror for a client that needed this functionality, and I was surprised at what I found.

Feeds module doesn’t have a stable release candidate, and it doesn’t look like one is coming any time soon. They’re still surveying people about what feeds module should even DO in D8. As the module page explains:

It’s not ready yet, but we are brainstorming about what would be the best way forward. Want to help us? Fill in our survey.
If you decide to use it, don’t be mad if we break it later.

This does not inspire confidence.

The next great candidate is Aggregator module (in core). Unfortunately, Aggregator gives you no control over the kind of entity to create, let alone any kind of field mapping. It imports content into its own Aggregated Content entity, with everything in one field, and links offsite. I suppose you could extend it to choose your own entity type, map fields, etc., but that seems like a lot of work for such a simple feature.

Frustrating, right?

What if I told you that Drupal 8 can do everything Feeds 7 can?

What if I told you that it’s even better: instead of clicking through endless menus and configuration links, waiting for things to load, missing problems, and banging your head against the mouse, you can set this up with one simple piece of text. You can copy and paste it directly from this blog post into Drupal’s admin interface.

What? How?

Drupal 8 can do all the Feedsy stuff you like with Migrate module. Migrate in D8 core already contains all the elements you need to build a regular importer of ANYTHING into D8. Add a couple of contrib modules to provide specific plugins for XML sources and convenience drush functions, and baby you’ve got a stew goin’!

Here’s the short version Howto:

1) Download and enable migrate_plus and migrate_tools modules. You should be doing this with composer, but I won’t judge. Just get them into your codebase and enable them. Migrate Plus provides plugins for core Migrate, so you can parse remote XML, JSON, CSV, or even arbitrary spreadsheet data. Migrate Tools gives us drush commands for running migrations.

2) Write your Migration configuration in text, and paste it into the configuration import admin page (admin/config/development/configuration/single/import), or import it another way. I’ve included a starter YAML just below, you should be able to copypasta, change a few values, and be done in time for tea.

3) Add a line to your system cron to run drush mi my_rss_importer at whatever interval you like.

That’s it. One YAML file, most of which is copypasta. One cronjob. All done!

Here’s my RSS importer config for your copy and pasting pleasure. If you’re already comfortable with migration YAMLs and XPaths, just add the names of your RSS fields as selectors in the source section, map them to drupal fields in the process section, and you’re all done!

If you aren’t familiar with this stuff yet, don’t worry! We’ll dissect this together, below.

id: my_rss_importer
label: 'Import my RSS feed'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://example.com/feed.rss'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    - name: guid
      label: GUID
      selector: guid
    - name: title
      label: Title
      selector: title
    - name: pub_date
      label: 'Publication date'
      selector: pubDate
    - name: link
      label: 'Origin link'
      selector: link
    - name: summary
      label: Summary
      selector: 'itunes:summary'
    - name: image
      label: Image
      selector: 'itunes:image[''href'']'
  ids:
    guid:
      type: string
destination:
  plugin: 'entity:node'
process:
  title: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: podcast_episode

Some of you can just stop here. If you’re familiar with the format and the structures involved, this example is probably all you need to set up your easy RSS importer.

In the interest of good examples for Migrate module though, I’m going to continue. Read on if you want to learn more about how this config works, and how you can use Migrate to do even more amazing things…

Anatomy of a migration YAML

Let’s dive into that YAML a bit. Migrate is one of the most powerful components of Drupal 8 core, and this configuration is your gateway to it.

That YAML looks like a lot, but it’s really just 4 sections. They can appear in any order, but we need all 4: General information, source, destination, and data processing. This isn’t rocket science after all! Let’s look at these sections one at a time.

General information

id: my_rss_importer
label: 'My RSS feed importer'
status: true

This is the basic stuff about the migration configuration. At a minimum it needs a unique machine-readable ID, a human-readable label, and status: true so it’s enabled. There are other keys you can include here for fun extra features, like module dependencies, groupings (so you can run several imports together!), tags, and language. These are the critical ones, though.

Source

source:
  plugin: url
  data_fetcher_plugin: file
  urls: 'https://example.com/feed.rss'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    - name: guid
      label: GUID
      selector: guid
    - name: title
      label: 'Item Title'
      selector: title
    - name: pub_date
      label: 'Publication date'
      selector: pubDate
    - name: link
      label: 'Origin link'
      selector: link
    - name: summary
      label: Summary
      selector: 'itunes:summary'
  ids:
    guid:
      type: string

This is the one that intimidates most people: it’s where you describe the RSS source. Migrate module is even more flexible than Feeds was, so there’s a lot to specify here… but it all makes sense if you take it in small pieces.

First: we want to use a remote file, so we’ll use the Url plugin (there are others, but none that we care about right now). All the rest of the settings belong to the Url plugin, even though they aren’t indented or anything.

There are two possibilities for Url’s data_fetcher setting: file and http. file is for anything you could pass to PHP’s file_get_contents, including remote URLs. There are some great performance tricks in there, so it’s a good option for most use cases. We’ll be using file for our example. http is specifically for remote files accessed over HTTP, and lets you use the full power of the HTTP spec to get your file. Think authentication headers, cache rules, etc.

Next we declare which plugin will read (parse) the data from that remote URL. We can read JSON, SOAP, arbitrary XML… in our use case this is an RSS feed, so we’ll use one of the XML plugins. SimpleXML is just what it sounds like: a simple way to get data out of XML. In extreme use cases you might use XML instead, but I haven’t encountered that yet (ever, anywhere, in any of my projects). TL;DR: SimpleXML is great. Use it.

Third, we have to tell the source where it can find the actual items to import. XML is freeform, so there’s no way for Migrate to know where the future “nodes” are in the document. So you have to give it the XPath to the items. RSS feeds have a standardized path: /rss/channel/item.

Next we have to identify the "fields" in the source. You see, Migrate module is built around the idea that you'll map source fields to destination fields. That's core to how it thinks about the whole process. But XML (and by extension RSS) is a freeform format; it doesn't think of itself as having "fields" at all. So we'll have to give our source plugin XPaths for the data we want out of the feed, assigning each path to a virtual "field". These "fake fields" let Migrate treat this source just like any other.

If you haven’t worked with XPaths before, the example YAML in this post gives you most of what you need to know. It’s just a simple text system for specifying a tag within an unstructured XML document. Not too complicated when you get into it. You may want to find a good tutorial to learn some of the tricks.

Let’s look at one of these “fake fields”:

name: summary
label: Summary
selector: 'itunes:summary'

name is how we’ll address this field in the rest of the migration. It’s the source “field name”. label is the human readable name for the field. selector is the XPath inside the item. Most items are flat – certainly in RSS – so it’s basically just the tag that surrounds the data you want. There, was that so hard?

As a side note, you can see that my RSS feeds tend to be for iTunes. Sometimes the world eats an apple, sometimes an apple eats the world. Buy me a beer at Drupalcon and we can argue about standards.

Fifth and finally, we identify which “field” in the source contains a unique identifier. Migrate module keeps track of the association between the source and destination objects, so it can handle updates, rollbacks, and more. The example YAML relies on the very common (but technically optional) <guid> tag as a unique identifier.

Destination

destination:
  plugin: 'entity:node'

Yep, it’s that simple. This is where you declare what Drupal entity type will receive the data. Actually, you could write any sort of destination plugin for this – if you want Drupal to migrate data into some crazy exotic system, you can do it! But in 99.9% of cases you’re migrating into Drupal entities, so you’ll want entity:something here. Don’t worry about bundles (content types) here; that’s something we take care of in field mapping.

Process

process:
  title: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: podcast_episode

This is where the action happens: the process section describes how destination fields should get their data from the source. It’s the “field mapping”, and more. Each key is a destination field, each value describes where the data comes from.

If you don’t want to migrate the whole field exactly as it’s presented in the source, you can put individual fields through Migrate plugins. These plugins apply all sorts of changes to the source content, to get it into the shape Drupal needs for a field value. If you want to take a substring from the source, explode it into an array, extract one array value and make sure it’s a valid Drupal machine name, you can do that here. I won’t do it in my example because that sort of thing isn’t common for RSS feeds, but it’s definitely possible.

The examples of plugins that you see here are simple ones. status and type show you how to set a fixed field value. There are other ways, but the default_value plugin is the best way to keep your sanity.

The created field is a bit more interesting. The Drupal field is a unix timestamp of the time a node was authored. The source RSS uses a string time format, though. We’ll use the format_date plugin to convert between the two. Neat, eh?

Don’t forget to map values into Drupal’s status and type fields! type is especially important: that’s what determines the content type, and nodes can’t be saved without it!

That’s it?

Yes, that’s it. You now have a migrator that pulls from any kind of remote source, and creates Drupal entities out of the items it finds. Your system cron entry makes sure this runs on a regular schedule, rather than overloading Drupal’s cron.

More importantly, if you’re this comfortable with Migrate module, you’ve just gained a lot of new power. This is a framework for getting data from anywhere, to anywhere, with a lot of convenience functionality in between.

Happy feeding!

Tips and tricks

OK I lied, there is way more to say about Migrate. It’s a wonderful, extensible framework, and that means there are lots of options for you. Here are some of the obstacles and solutions I’ve found helpful.

Importing files

Did you notice that I didn’t map the images into Drupal fields in my example? That’s because it’s a bit confusing. We actually have an image URL that we need to download, then we have to create a file entity based on the downloaded file, and then we add the File ID to the node’s field as a value. That’s more complicated than I wanted to get into in the general example.

To do this, we have to create a pipeline of plugins that will operate in sequence, to create the value we want to stick in our field_image. It looks something like this:

field_image:
  - plugin: download
    source:
      - image
      - constants/destination_uri
    rename: true
  - plugin: entity_generate

Looking at that download plugin, image seems clear. That’s the source URL we got out of the RSS feed. But what is constants/destination_uri, I hear you cry? I’m glad you asked. It’s a constant, which I added in the source section and didn’t tell you about. You can add any arbitrary keys to the source section, and they’ll be available like this in processing. It is good practice to lump all your constants together into one key, to keep the namespace clean. This is what it looks like:

source:
  # ... usual source stuff here ...
  constants:
    destination_uri: 'public://my_rss_feed/post.jpg'

Before you ask, yes this is exactly the same as using the default_value plugin. Still, default_value is preferred for readability wherever possible. In this case it isn’t really possible.

Also, note that the download plugin lets me set rename: true. This means that in case of a name conflict, a 0, 1, 2, 3 etc will be added to the end of the filename.

You can see the whole structure here, of one plugin passing its result to the next. You can chain unlimited plugins together this way…

Multiple interrelated migrations

One of the coolest tricks that Migrate can do is to manage interdependencies between migrations. Maybe you don’t want those images just as File entities, you actually want them in Paragraphs, which should appear in the imported node. Easy-peasy.

First, you have to create a second migration for the Paragraph. Technically you should have a separate Migration YAML for each destination entity type. (yes, entity_generate is a dirty way to get around it, use it sparingly). So we create our second migration just for the paragraph, like this:

id: my_rss_images_importer
label: 'Import the images from my RSS feed'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://example.com/feed.rss'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    - name: guid
      label: GUID
      selector: guid
    - name: image
      label: Image
      selector: 'itunes:image[''href'']'
  ids:
    guid:
      type: string
  constants:
    destination_uri: 'public://my_rss_feed/post.jpg'
destination:
  plugin: 'entity:paragraph'
process:
  type:
    plugin: default_value
    default_value: podcast_image
  field_image:
    - plugin: download
      source:
        - image
        - constants/destination_uri
      rename: true
    - plugin: entity_generate

If you look at that closely, you’ll see it’s a simpler version of the node migration we did at first. I did the copy pasting myself! Here are the differences:

  • Different ID and label (duh)
  • We only care about two “fields” on the source: GUID and the image URL.
  • The destination is a paragraph instead of a node.
  • We’re doing the image trick I just mentioned.

Now, in the node migration, we can add our paragraphs field to the “process” section like this:

field_paragraphs:
  plugin: migration_lookup
  migration: my_rss_images_importer
  source: guid

We’re using the migration_lookup plugin. This plugin takes the value of the field given in source, and looks it up in my_rss_images_importer to see if anything with that source ID was migrated. Remember where we configured the source plugin to know that guid was the unique identifier for each item in this feed? That comes in handy here.

So we pass the guid to migration_lookup, and it returns the id of the paragraph which was created for that guid. It finds out what Drupal entity ID corresponds to that source ID, and returns the Drupal entity ID to use as a field value. You can use this trick to associate content migrated from separate feeds, totally separate data sources, or whatever.

You should also add a dependency on my_rss_images_importer at the bottom of your YAML file, like this:

migration_dependencies:
  required:
    - my_rss_images_importer

This will ensure that my_rss_images_importer will always run before my_rss_importer.

(NB: in Drupal < 8.3, this plugin is called migration)

Formatting dates

Very often you will receive dates in a format other than what Drupal wants to accept as a valid field value. In this case the format_date process plugin comes in very handy, like this:

field_published_date:
  plugin: format_date
  from_format: 'D, d M Y H:i:s O'
  to_format: 'Y-m-d\TH:i:s'
  source: pub_date

This one is pretty self-explanatory: from format, to format, and source. This is important when migrating from Drupal 6, whose date fields store dates differently from 8. It’s also sometimes handy for RSS feeds. :)

Drush commands

Very important for testing, and the whole reason we have migrate_plus module installed! Here are some handy drush commands for interacting with your migration:

  • drush ms: Gives you the status of all known migrations. How many items are there to import? How many have been imported? Is the import running?
  • drush migrate-rollback: Rolls back one or more migrations, deleting all the imported content.
  • drush migrate-messages: Get logged messages for a particular migration.
  • drush mi: Runs a migration. use --all to run them all. Don’t worry, Migrate will sort out any dependencies you’ve declared and run them in the right order. Also worth noting: --limit=10 does a limited run of 10 items, and --feedback=10 gives you an in-progress status line every 10 items (otherwise you get nothing until it’s finished!).

Okay, now that’s really it. Happy feeding!


Hook 42: Accessibility and Drupal Meetup

June 6, 2017 - 10:22pm

"The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."
- Tim Berners-Lee, W3C Director and inventor of the World Wide Web

As a community, Drupal wants to be sure that the websites and the features we build are accessible to everyone, including those who have disabilities. To be inclusive we must think beyond color contrasts, font scaling, and alt texts. Identifying the barriers and resolving them is fundamental in making the web inclusive for everyone.

Accessibility fosters social equality and inclusion for not just those with disabilities but also those with intermittent internet access in rural communities and developing nations.

The Bay Area is fortunate to have Mike Gifford visiting from Canada, and he brings with him unique perspectives on web accessibility. Hook 42 has organized an evening with Mike for conversation, collaboration, and thought leadership surrounding Drupal accessibility.


TimOnWeb.com: Add reCaptcha to your Drupal 7 forms programmatically

June 6, 2017 - 7:44pm

If you want to add Google's reCaptcha (https://www.google.com/recaptcha/intro/index.html) to your Drupal 7 forms programmatically, you need to follow these two steps:

1) Install and enable captcha (https://www.drupal.org/project/captcha) and recaptcha (https://www.drupal.org/project/recaptcha) modules. The best ...
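
For quick reference, a hedged sketch of the usual code pattern (not necessarily the author's exact approach): attach the CAPTCHA module's 'captcha' form element in hook_form_alter(). The module name and form ID below are hypothetical:

// In a custom module file (mymodule.module); the form ID is a placeholder.
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'mymodule_contact_form') {
    // The CAPTCHA module provides the 'captcha' element type, and the
    // reCAPTCHA module registers the 'recaptcha/reCAPTCHA' challenge.
    $form['my_captcha'] = array(
      '#type' => 'captcha',
      '#captcha_type' => 'recaptcha/reCAPTCHA',
    );
  }
}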

Read now


Elevated Third: Acquia Showcases Headless Drupal Development for Boreal Mountain Resort

June 6, 2017 - 3:59pm

We recently launched our first decoupled Drupal site for Boreal Mountain Resort. Working closely with our hosting platform, Acquia, and front-end developers Hoorooh Digital, we spun up rideboreal.com as a fully customized front-end experience on the back-end framework of Drupal 8.

Our hosting partner, Acquia, recapped the build in a fantastic blog post. It offers an in-depth look at the working relationship between Elevated Third, Acquia, and Hoorooh Digital.

There is always satisfaction in retracing the progression of a project from initial discovery to final site launch. But more than an excuse to pat ourselves on the back, reflecting on projects helps us improve. It gives us a sense of how we stack up against our original goals and provides context for future builds.
For more information on decoupled Drupal development and other industry news, Acquia's blog is an awesome resource. Check it out!

Droptica: Start your adventure with docker-console using the example of a Drupal 7 project

June 6, 2017 - 10:02am
docker-console init --tpl drupal7

People who follow our blog already know that we're using Docker at Droptica. We have also told you how easy it is to start a project using our docker-drupal application (https://www.droptica.pl/blog/poznaj-aplikacje-docker-drupal-w-15-minut-docker-i-przyklad-projektu-na-drupal-8/). Another step on the road to becoming efficient and proficient with Docker is the docker-console application, a newer version of docker-drupal; exactly like its predecessor, it was created to make building a working environment for Drupal simpler and more efficient. How does it all work? You will see in this write-up. Since we're all working on Linux (mainly Ubuntu), all commands shown in this post were executed on Ubuntu 16.04.

ThinkShout: Fade To Black - Responsive CSS Gradients

June 6, 2017 - 9:30am

Responsive design brings a fascinating array of challenges to both designers and developers. Using background images in a call to action or blockquote element is a great way to add visual appeal to a design, as you can see in the image to the left.



However, at mobile sizes, you're faced with some tough decisions. Do you try to stretch the image to fit the height of the container? If so, at very tall/narrow widths, you're forced to load a giant image, and it likely won't be recognizable.

In addition, forcing mobile users to load a large image is bad for performance. Creating custom responsive image sets would work, but that sets up a maintenance problem, something most clients will not appreciate.

Luckily, there’s a solution that allows us to keep the image aspect ratio, set up standard responsive images, and it looks great on mobile as well. The fade-out!

I’ll be using screenshots and code here, but I’ve also made all 6 steps available on CodePen if you want to play with the code and try out different colors, images, etc…



Let’s start with that first blockquote:

(pen) This is set up for desktop - the image aspect ratio determines the height of the container via the padding-ratio trick. Everything in the container uses absolute positioning and flexbox for centering. We have a simple rgba() background set using the :before pseudo-element on the .parent-container:

:before {
  content: "";
  display: block;
  position: absolute;
  width: 100%;
  height: 100%;
  background-color: rgba(0, 0, 0, 0.4);
  z-index: 10;
  top: 0;
}



(pen) The issues arise once we get a quote of reasonable length, and/or the page width gets too small. As you can see, it overflows and breaks quite badly.



(pen) We can fix this by setting some changes to take place at a certain breakpoint, depending on the max length of the field and the size of the image used.

Specifically, we remove the padding from the parent element, and make the .content-wrapper position: static. (I like to set a min-height as well just in case the content is very small)



(pen) Now we can add the fader code - background-image: linear-gradient, which can be used unprefixed. This is inserted into the .image-wrapper using another :before pseudo-element:

:before {
  content: "";
  display: inline-block;
  position: absolute;
  width: 100%;
  height: 100%;
  background-image: linear-gradient(
    /* Fade over the entire image - not great. */
    rgba(0, 0, 0, 0.0) 0%,
    rgba(255, 0, 0, 1.0) 100%
  );
}



(pen) The issue now is that the gradient covers the entire image, but we can fix that easily by adding additional rgba() values, in effect ‘stretching’ the part of the gradient that’s transparent:

:before {
  background-image: linear-gradient(
    /* Transparent at the top. */
    rgba(0, 0, 0, 0.0) 0%,
    /* Still transparent through 70% of the image. */
    rgba(0, 0, 0, 0.0) 70%,
    /* Now fade to solid to match the background. */
    rgba(255, 0, 0, 1.0) 100%
  );
}



(pen) Finally, we can fine-tune the gradient by adding even more rgba() values and setting the percentages and opacity as appropriate.

Once we’re satisfied that the gradient matches the design, all that’s left is to make the gradient RGBA match the .parent-container background color (not the overlay - this tripped me up for a while!), which in our case is supposed to be #000:


:before {
  background-image: linear-gradient(
    rgba(0, 0, 0, 0.0) 0%,
    rgba(0, 0, 0, 0.0) 70%,
    /* These three 'smooth' out the fade. */
    rgba(0, 0, 0, 0.2) 80%,
    rgba(0, 0, 0, 0.7) 90%,
    rgba(0, 0, 0, 0.9) 95%,
    /* Solid to match the background. */
    rgba(0, 0, 0, 1.0) 100%
  );
}

We’ll be rolling out sites in a few weeks with these techniques in live code, and with several slight variations to the implementation (mostly adding responsive images and making allowances for Drupal’s markup), but this is the core idea used.

Feel free to play with the code yourself, and change the rgba() values so that you can see what each is doing.


InternetDevels: Using Node.js with Drupal: the time has come for some real-time magic!

June 6, 2017 - 8:38am

There is a real "elixir of vivacity" that can help your Drupal website or app come alive in a way it never has. Sound tempting? You'll discover the rest in today's story. After a glimpse at combining Drupal with AngularJS, we are now moving on to another member of the JavaScript family that is rapidly gaining popularity — Node.js. Let's discover the reasons for its recognition, the benefits of using Node.js with Drupal, and the tool that helps you bring them together.

Read more

Agiledrop.com Blog: AGILEDROP: Top Drupal Blogs from May

June 6, 2017 - 3:29am
We hope you stay as informed as possible about all things Drupal, and we do our best to help. One way is by looking back at the best work from other authors over the past month, so here are the best Drupal blogs from May. We will start our list with Improvements and changes in Commerce 2.x by Sascha Grossenbacher. In this blog post, the author explains some of the key differences in the new version of Drupal Commerce and how they affect developers and users. Our second choice is What makes DrupalCon different? from Dagny Evans. She… READ MORE

Freelock: Added D8 Rules support to Matrix API

June 5, 2017 - 2:01pm

As of today, the Drupal Matrix API module supports sending messages to a room via Rules. Now you can configure notifications to Matrix rooms without touching any code!

This is useful if you want to get notified in a Matrix room of some event on your website, such as a new comment, a user registration, updated content, etc.

Rules is still in Alpha, and has some UI quirks, but it works fine.

Tags: Drupal, Matrix, Drupal 8, Drupal Planet, Integration
